perm filename LISP[LIB,LSP] blob sn#298410 filedate 1977-07-27 generic text, type C, neo UTF8
.device xgp
.font 1 "31vr.fnt[fnt,jp]";
.font 2 "31vri.fnt[fnt,jp]";
.font 3 "bb.fnt[fnt,jp]";
.turn on "%{"
.at "≡≡" TXT "≡" ⊂ "%3TXT%*" ⊃
.at "⊗⊗" TXT "⊗" ⊂ "%2TXT%*" ⊃
.macro example  ⊂
.	skip; begin nofill; select 3;
.       ⊃
.every footing(,{PAGE},)
.every heading(V.R. Pratt,LISP,{date})


.begin center
LISP
Vaughan R. Pratt
M.I.T.
July 4, 1977
.end





<This is an article to appear in the Encyclopaedia of Computer Science and
Technology, eds Belzer et al.  This article is still not quite complete, but is
being made generally available to those expressing interest, to permit on-line
feedback.  Please note any overlong sentences, too much pedantry, or overly
mathematical descriptions, weaknesses of mine when I'm not paying attention.>
.begin center
LISP

Vaughan R. Pratt

M.I.T.
July 4, 1977

CONTENTS
.end

.begin verbatim
INTRODUCTION
ATOMS AND LISTS
QUOTING AND DENOTING
FUNCTIONS, PREDICATES AND FUNCTIONALS
EXAMPLE LISP PROGRAMS
THE LISP PROCESSOR
INPUT-OUTPUT
MISCELLANEA
THE MATHEMATICS BEHIND LISP
PROGRAM VERIFICATION IN LISP
IMPLEMENTATION OF LISP
APPLICATIONS
BIBLIOGRAPHY
.end
.next page
.ONCE CENTER 
INTRODUCTION

	The programming language LISP was conceived by John McCarthy at M.I.T.
in 1959.  The name is an acronym for LISt Processing language.  Originally LISP
was seen as supplying a list-processing capability to the Artificial
Intelligence community.  A more modern view is that LISP has two facets. 
First it is a tool whose use within AI goes well beyond the data type "list" and
whose applicability to non-AI problems is slowly becoming better appreciated. 
Second, it has been the most influential language in applying ideas from
mathematical logic to programming language design, much as APL has
borrowed from linear algebra.

	There is no one source of authority on what constitutes LISP.  In this
respect LISP is unlike FORTRAN, for example, for which an ASA (American
Standards Association) standard exists.  The original system, LISP 1,
documented in 1960 by McCarthy ⊗⊗et al⊗, was soon updated to LISP 1.5, documented
in 1962 by McCarthy, Abrahams, Edwards, Hart, and Levin.  This formed the basis
for an introductory text on LISP 1.5 by C. Weissman.  The numbering 1.5 was in
deference to the expected appearance of LISP 2, which was to offer algebraic
notation and better numerical capability.  This project fell through in part
for lack of funding.

	The major extant implementations are INTERLISP (formerly BBN-LISP, now
maintained chiefly at XEROX Palo Alto Research Center), MACLISP (an M.I.T.
offspring of LISP 1.5), and UCI-LISP (an extension of Stanford's LISP 1.6
developed at the University of California at Irvine).  There are also a number
of other implementations of LISP in various parts of the world, including one
at IBM's T. J. Watson Research Center used primarily for the Scratchpad symbol
manipulation system, and another at the Electrotechnical Laboratory (ETL) in
Japan.  The following account of LISP gives a fairly accurate picture of the
common features of the major implementations.

	While it is customary to introduce a programming language by exhibiting
examples of its programs rather than its data, LISP is distinguished from other
widely used programming languages by the fact that LISP programs ⊗⊗are⊗ LISP data. 
Thus in beginning with LISP data we will also be treating some aspects of how
one writes LISP programs.  In particular, the rules for writing LISP data
are among the rules for writing LISP programs.

.next page
.once center 
ATOMS AND LISTS

	The most elementary LISP data are atoms.  An atom is written as a
sequence of printing characters, namely the letters ≡≡A-Z≡, the digits ≡≡0-9≡, and
.TURN ON "α"
the "glyphs" ≡≡!#$α%&←=@↑:+-*<>?≡ (give or take a glyph from one implementation to
.TURN OFF "α"
the next).  Implementations vary on whether lower-case letters are
distinguished from upper-case; here we simply avoid lower-case letters in atom
names.

.TURN ON "α"
⊗⊗Examples of Atoms.⊗  The following are all atoms:   ≡≡3, -40, 3.1415926535,
6.023E23, T, NIL, X, INCOME-TAX, PLUS, YOU!#$α%@, +, =, <-->.≡
.TURN OFF "α"

⊗⊗Numbers and Symbols.⊗  The first four atoms in the above examples are numbers
and the rest are symbols.  T and ≡≡NIL≡ are truth values (respectively ⊗⊗true⊗ and
⊗⊗false⊗).  A number is an atom written with digits, together with any or all of a
leading + or -, a decimal point, and an %3E%* meaning "times ≡≡10≡ to the".  

⊗⊗Use of atoms⊗.  Numbers and truth values need no defense in programming. 
Symbols other than %3T%* and ≡≡NIL≡ are useful as variables in LISP programs. 
More importantly from the applications point of view, symbols in LISP data are
useful in programs performing symbolic, as opposed to numeric, computation,
such as simplifying algebraic formulae, proving theorems automatically,
processing natural language text, scheduling classes, playing chess, and so on. 
(It is not altogether surprising that LISP was developed at an Artificial
Intelligence laboratory.)
⊗⊗Lists⊗. Structured data can be represented in LISP as lists.  A list is written
in parentheses and the items in the list are separated by spaces.  Thus ≡≡(KING
LEAR)≡ is a list of two atoms.  

⊗⊗Nested lists⊗.  A list need not be merely a list of atoms.  It may contain lists
as well.  Thus ≡≡((A B) C (D E))≡ is a list of three things, the list ≡≡(A B)≡,
the atom %3C%*, and the list ≡≡(D E)≡.  Such lists in turn may contain lists. 
Even the unlikely object ≡≡(((((A)))))≡ is a list; it is a list of a list of a
list of a list of a list of an atom.  The atom %3A%* is said to be ⊗⊗nested⊗ to depth
%35%* in that list while the list ≡≡((A))≡ is nested to depth 3.

≡≡NIL≡.  A list need not have any elements.  That is, ≡≡()≡ is a permissible
list.  ≡≡()≡ is also known as the atom ≡≡NIL≡.  ≡≡NIL≡ is the only LISP datum that
is both an atom and a list.

	Had the ancient Greeks invented LISP, ≡≡NIL≡ would have been given
as short shrift as %30%*, which along with 1 was not considered a number
because numbers were supposedly for dealing with pluralities.  The more
modern view is that %30%*, and by the same token ≡≡NIL≡, play an important role
in the basis case of inductive arguments about respectively the set
.TURN ON "α"
≡≡α{0,1,2,...α}≡ of all natural numbers and the set of all lists.  The section
.TURN OFF "α"
on program verification contains a more detailed treatment of this issue.

⊗⊗Lists of indefinite length⊗.  We often need to refer to lists of indefinite
length.  We write "≡≡(a ... z)≡" to denote an arbitrary list, "≡≡(a b ... z)≡" to
denote one with at least one element, "≡≡(a b c ... z)≡" one with at least two
elements, etc.  (As a mnemonic for this unmemorable convention, delete the
first and last elements, which will always be present, and count what is left.) 
The convention will also apply to "≡≡plus[a,b,c,...,z]≡" and "≡≡a+b+c+...+z≡,"
both informal (non-LISP) expressions denoting the sum of at least two numbers,
and similarly for other functions.

⊗⊗Use of lists⊗.  Lists supply a useful way to impose structure on data.  For
example, the structure inherent in the expression "King Lear is a man" may be
captured in the list ≡≡((KING LEAR) IS (A MAN))≡.  The mathematical expression
"≡≡f[x,y]≡" may be expressed in LISP as ≡≡(F X Y)≡.  ("≡≡f[x,y]≡" is written with
brackets rather than parentheses to avoid confusion with the parentheses used
in lists.)  This generalizes to such expressions as "≡≡x+y≡" if we consider
"≡≡x+y≡" to be an abbreviation of "≡≡plus[x,y]≡", which may then be written
≡≡(PLUS X Y)≡.  The definite integral of the sine function from %30%* to 1 may be
represented as ≡≡(INTEGRAL SIN (0 1))≡.  The contents of a room may be given by
.skip
.begin nofill; select 3
   ((BED     (TYPE 4-POSTER) (LENGTH 6.3 FEET))
    (TABLE   (COLOR WHITE)   (LEGS 3)    (HEIGHT 30 INCHES))).
.end
The 3-by-3 identity matrix may be represented as ≡≡((1 0 0) (0 1 0) (0 0 1))≡. 
The complex number 2+5i may be written ≡≡(2 5)≡.

	The electrical circuit consisting of resistors %3A%* and %3B%* in parallel,
with the result in series with resistor %3C%*, can be dealt with as
≡≡(SERIES (PARALLEL A B) C)≡.  It is impossible to represent all circuits using
just ≡≡SERIES≡ and ≡≡PARALLEL≡, so one may need to resort to a more general
representation, such as that based on labelling the nodes of the circuit with
numbers.  If we number the node common to all three resistors %32%*, the other end
of %3A%* and %3B%* %31%*, and the other end of %3C%* %33%*, then the circuit may be written
≡≡((1 (A 2) (B 2)) (2 (C 3)))≡, expressing the fact that from node %31%* there is a
resistor %3A%* to %32%* and a resistor %3B%* to %32%*, while from node %32%* there is a resistor %3C%*
to %33%*.

	The relationships between the ⊗⊗dramatis personae⊗ of a play may be
represented as
.skip
.begin nofill; select 3
   ((ROBERT      (WIFE MARY) (HATES (SIR GILES)))
    (MARY        (HUSBAND ROBERT) (LOVES (SIR GILES)))
    ((SIR GILES) (LOVES ROBERT))).
.end
In this example the main list contains three lists, one for each character. 
The first item of each list gives the character's name.  The remaining items
give interesting properties of that character.  Thus ≡≡(HUSBAND ROBERT)≡ in the
entry for Mary is intended to say that Mary's husband is Robert.
	These examples illustrate the considerable versatility of lists in
representing structured data, whether or not that structure is regular or
irregular.

⊗⊗Pretty-printing⊗.  The list in the last example is laid out neatly the better to
exhibit its structure.  Such formatting of a list is called ⊗⊗pretty-printing⊗.
The conventions for pretty-printing a list depend on the application.  An
important application is to programs represented as lists.

⊗⊗Expressions as lists⊗.  A particularly important use of lists is in representing
expressions such as "≡≡x+y≡" as LISP data.  This is accomplished in two steps. 
The function is given an explicit name, "≡≡plus≡" in this case, so that we may
rephrase "≡≡x+y≡" as "≡≡plus[x,y]≡".  This in turn determines the actual list
that expresses this, namely ≡≡(PLUS X Y)≡.  Applying this rule to "≡≡x+yz≡"
first requires rephrasing it as "≡≡plus[x,times[y,z]]≡", which then tells us
that the list must be ≡≡(PLUS X (TIMES Y Z))≡.

⊗⊗M-expressions and S-expressions⊗.
Both "≡≡plus[x,y]≡" and "≡≡x+y≡" are called
⊗⊗M-expressions⊗ or meta-expressions.  The corresponding list ≡≡(PLUS X Y)≡ is
called an ⊗⊗S-expression⊗, or symbolic expression.  S-expressions include, but are
not confined to, atoms and lists.  (We consider the other possibilities after
treating the function ≡≡cons≡.) 

	Any atom or list can be an S-expression, and so S-expressions do not
always make sense.  The S-expression ≡≡(1 2 3)≡ is a list but has no other
meaning, unlike the list ≡≡(PLUS 1 2)≡ which denotes ≡≡3≡. 

	 Programming in LISP is done by writing S-expressions, while talking
about the concepts of LISP, as we are doing here, is usually done using
M-expressions, though in some circles S-expressions are used for both purposes. 
M-expressions are essential when discussing functions on lists, where their use
helps to avoid confusing lists themselves with what they denote.  The reader
encountering this distinction for the first time need not attempt to keep track
of its finer points, except possibly in the difficult material on quoting and
denoting.

⊗⊗Programs as S-expressions⊗.  All
LISP programs, or ⊗⊗forms⊗, are represented as
S-expressions.  This use of S-expressions depends on being able to view any
LISP program as either a number, a truth value, a symbol (used as a variable)
or as the application of a function to arguments.  In the first three cases the
program is an atom.  In the fourth it is a list whose first element represents
the function and whose remaining elements represent its arguments; the first
element is said to occupy the ⊗⊗function position⊗ of the form.

	The notion of function is generalized in LISP to encompass not only the
usual mathematical functions but a variety of "pseudo-functions" such as
≡≡(QUOTE X)≡ (a quoting pseudo-function), ≡≡(GO L)≡ (a command to go to label %3L%*),
and ≡≡(SETQ A (PLUS 1 2))≡ (a command to change the value of the symbol %3A%* to
that of the form ≡≡(PLUS 1 2)≡).

	The representation of LISP programs as LISP data makes it possible for
one program to operate on another as mere data.  This of course is what happens
with machine-language programs, at the level of data being represented as
bit-strings, but the convenience of LISP data structures for this purpose
considerably simplifies the implementation of such program-manipulating
programs as interpreters, compilers and optimizers.  LISP's treatment of
symbols obviates the need for the compiler-writer to implement his own symbol
tables, while LISP's functions for manipulating the internal representation of
expressions obviate the need to write special code to deal with parse trees
and related structures.

.next page
.once center 
QUOTING AND DENOTING

	We may say in English either that Fido is my dog or that Fido is the
name of my dog.  Turn-of-the-century editors were more cautious, insisting on
quoting our second use of Fido so that we would need to say that "Fido" is the
name of my dog.  Their position was that the unquoted form denoted the object,
and the quoted form denoted the word, and that one should not be used for the
other.

	This issue arises in any language where we use words to denote
objects, but may also need to refer to the words themselves.  It is not an
issue in ALGOL, which only permits discussion of numbers or truth values. 
However, it is an issue in a symbol-manipulating language such as LISP.  Here
symbols and lists may be used in their capacity as LISP programs to denote
other objects, or may simply be referred to, just as "Fido" may be used as a
word denoting a dog, or may simply be denoted as a word.

≡≡QUOTE≡.  We shall exercise the same caution as the editors of yore, both in
this article and in LISP programs.  Thus we shall insist that ≡≡1+2≡ is %33%* but
that "≡≡1+2≡" is not %33%* but an expression denoting %33%*.  Correspondingly, in a
LISP program we use the list ≡≡(QUOTE x)≡ to denote the object %3x%*.  Thus in LISP
the list ≡≡(PLUS 1 2)≡ denotes %33%* while the list ≡≡(QUOTE (PLUS 1 2))≡ denotes a
list of three atoms, namely ≡≡PLUS≡, %31%*, and %32%*.  The abbreviation ≡≡'x≡ for
≡≡(QUOTE x)≡ is permitted in some implementations, and we shall use it too.  It
is the only abbreviation of LISP programs we shall use.

⊗⊗Eval⊗.  The function ≡≡eval≡ makes explicit the way in which LISP programs
denote, by mapping them to what they denote.  Thus ≡≡eval[(PLUS 1 2)]≡ is %33%*. 
The LISP equivalent of "≡≡eval[(PLUS 1 2)]≡" is ≡≡(EVAL '(PLUS 1 2))≡.  Note the
need to quote the list ≡≡(PLUS 1 2)≡ when it is referred to in a LISP program. 
Had we written ≡≡(EVAL (PLUS 1 2))≡ the argument ≡≡(PLUS 1 2)≡ would have denoted
%33%*, so that our program would have called for the evaluation of the number %33%*
rather than of the list ≡≡(PLUS 1 2)≡.  (As it turns out, ≡≡(EVAL 3)≡ denotes %33%*
in LISP anyway, on the ground that numbers denote themselves.)
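
	The behavior of ≡≡QUOTE≡ and ≡≡EVAL≡ may be summarized by a few sample
evaluations (a sketch in the notation of this section; all four follow directly
from the rules just given).
.skip
.begin nofill; select 3
(PLUS 1 2)              %1evaluates to%*  3
(QUOTE (PLUS 1 2))      %1evaluates to%*  (PLUS 1 2)
(EVAL '(PLUS 1 2))      %1evaluates to%*  3
(EVAL ''(PLUS 1 2))     %1evaluates to%*  (PLUS 1 2)
.end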

	Though ≡≡eval≡ is available to LISP programmers, it should not be
needed in ordinary programming.  It is essential in discussing what LISP
programs mean however, and is also an essential aspect of the implementation of
LISP.

	The following are all ways of saying the same thing: %3e%* denotes %3v%*,
the value of %3e%* is %3v%*, %3e%* has value %3v%*, %3e%* evaluates to %3v%*, and ≡≡eval[e]≡
is %3v%*.  We will use the last two only when %3e%* is a LISP program.

⊗⊗Names⊗.  An expression, whether in our discussion or in LISP, may contain names
of particular things, or of indefinite things.  These names are called
respectively constants and variables.

⊗⊗Constants⊗.  Examples of constants encountered already in discussion are "%33%*",
"%3T%*", "≡≡CAT≡", "≡≡(PLUS 1 2)≡", and "≡≡plus≡", denoting respectively a number, a
truth value, a symbol, a list, and a function.  The LISP equivalents of these
are %33%*, %3T%*, ≡≡'CAT≡, ≡≡'(PLUS 1 2)≡, and ≡≡(FUNCTION PLUS)≡.  (≡≡FUNCTION≡ is
discussed in the section on functionals.)

	The correspondence between constants appearing in the discussion and
constants appearing in LISP programs is: numbers and truth values are the same
in both; other symbols and lists need to be quoted in LISP with ≡≡QUOTE≡ or %3'%*;
and functions named explicitly need to be quoted with ≡≡FUNCTION≡.

⊗⊗Variables⊗.  In our discussion, variables will always be written as single
italic letters such as "%3a%*" and "%3x%*".  Their LISP equivalents are %3A%* and
%3X%*.  LISP variables may be any symbols except %3T%* and ≡≡NIL≡, such as %3X%*,
.TURN ON "α"
≡≡INCOME-TAX≡, %3+%*, ≡≡α%-DISCOUNT≡, and ≡≡YOU!#$α%≡.  Fortunately LISP is blessed
.TURN OFF "α"
with many more variables than the 26 of our discussion language.
	Although we shall encounter expressions such as "≡≡(PLUS x y)≡," these
do not denote definite LISP programs unless we know the values of the
variables, which stand in this example for the two forms that are the arguments
to ≡≡PLUS≡.  Italicized variables are not LISP variables and do not appear in
actual LISP programs.

.TURN ON "∂"
	We shall sometimes use "%3a∂-1↑%*" for "≡≡eval[a]≡".  Thus in describing
≡≡(PLUS a b)≡, where %3a%* and %3b%* are themselves programs, we shall say that
≡≡(PLUS a b)≡ evaluates to ≡≡a∂-1↑+b∂-1↑≡.

⊗⊗Binding⊗.  As a LISP program, a symbol may or may not denote a value; when it
does it is said to be ⊗⊗bound⊗ to that value, otherwise it is said to be ⊗⊗unbound⊗.
Most symbols are initially unbound when a user begins an interactive session
with LISP.  Binding is used to implement both the usual notion of assignment to
a variable and the assignment of argument values to formal parameters in
function definitions, processes that we will consider later.
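
	For example, the assignment pseudo-function ≡≡SETQ≡ mentioned earlier
changes the current environment by binding (or rebinding) a symbol; a sketch of
an interactive exchange:
.skip
.begin nofill; select 3
(SETQ A (PLUS 1 2))     %1binds%* A %1to%*  3
A                       %1thereafter evaluates to%*  3
.end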

⊗⊗No type restrictions⊗. Any symbol may be bound to any value.  Symbols are not
restricted to particular types, in contrast to most other languages where
variables are either explicitly declared to be of type integer, real, Boolean,
etc, or are implicitly typed, as in FORTRAN where the first letter of the
variable's name gives the type.  Some LISP implementations permit "hints" to
the implementation (specifically, to the compiler) of the form "You may assume
that %3X%* will only take on integer values."  However these hints should not be
construed as part of the definition of the language proper.

⊗⊗Environments⊗.  An ⊗⊗environment⊗ is a collection of atoms together with their
bindings.  All evaluation in LISP takes place in "the current environment",
a concept that depends on the idea that evaluation is a process, an idea that
we shall discuss later.  The environment can change during evaluation whenever
symbols become bound or rebound as a part of the process of evaluation, an
issue that we address in detail in the section on the LISP processor.

	Some implementations offer representations of environments, and permit
≡≡eval≡ to take an environment as an optional second argument.  The evaluation
is then performed in the given environment rather than in the current
environment.

.next page
.once center  
FUNCTIONS, PREDICATES AND FUNCTIONALS

	Functions play an important role in computation, providing a means of
producing new and interesting objects from old.  Predicates are necessary to
permit the program to make choices based on relevant properties of the data in
hand.  Numerically oriented programming languages offer a wide range of numeric
functions and predicates such as ≡≡plus≡, ≡≡times≡, ≡≡sin≡, ≡≡equal≡, etc.  Since
LISP deals not only with numbers but with symbols and lists, it must offer a
wider variety of functions and predicates.

⊗⊗Functions on numbers⊗.  LISP offers a modest collection of numeric functions and
predicates.  All of the following functions should be familiar to the reader;
we tabulate them here mainly to specify the LISP representation for them, and
to remark on properties of them that are peculiar to LISP.  (Recall that
"%3a%*λ↑" is our abbreviation for "≡≡eval[a]≡.")
.skip
.begin nofill; select 3; turn on "∂"
(PLUS a b)       a∂-1↑+b∂-1↑       (TIMES a b)          a∂-1↑b∂-1↑
(DIFFERENCE a b) a∂-1↑-b∂-1↑       (QUOTIENT a b)       a∂-1↑/b∂-1↑
(MINUS a)        -a∂-1↑        (EXPT a b)           a∂-1↑ %1to the power%* b∂-1↑
(ADD1 a)         a∂-1↑+1       (REMAINDER a b)      a∂-1↑ mod b∂-1↑   %1(see text below)%*
(SUB1 a)         a∂-1↑-1       (ABS a)              |a∂-1↑|
.end
	LISP supports fixed point (integer) and floating point (real)
arithmetic.  The functions ≡≡fix≡ and ≡≡float≡ may be applied to any numeric
quantity to coerce it to the appropriate type.  The type produced by an
arithmetic function is real when one or more of its arguments is real (this
rule is affectionately called "contagious floating point"), otherwise it is
integer.  This applies even to ≡≡quotient≡, which given two integer arguments
produces the integer closest to the exact quotient whose absolute value does
not exceed that of the exact quotient; that is, it truncates towards zero. 
(This definition coincides with current practice in the
computer world but not the mathematical world, which omits "absolute" in that
definition.)  However ≡≡remainder≡ does satisfy the identity ≡≡a = qb+r≡ where %3q%*
is ≡≡quotient[a,b]≡ and %3r%* is ≡≡remainder[a,b]≡.  Thus ≡≡(REMAINDER -1 3)≡
evaluates to ≡≡-1≡.  The mathematical definition would give %32%*. 
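
	A few sample evaluations illustrate the truncation rule and the
identity ≡≡a = qb+r≡; the parenthesized checks are ordinary arithmetic.
.skip
.begin nofill; select 3
(QUOTIENT 7 2)          %1evaluates to%*  3
(QUOTIENT -7 2)         %1evaluates to%*  -3
(REMAINDER -7 2)        %1evaluates to%*  -1    %1(-7 = (-3)(2) + (-1))%*
(REMAINDER -1 3)        %1evaluates to%*  -1    %1(-1 = (0)(3) + (-1))%*
.end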

	Most implementations permit ≡≡plus≡ and ≡≡times≡ to take any number of
arguments, with the obvious meaning in each case, including ≡≡plus[] = 0≡ and
≡≡times[] = 1≡, the case of no arguments.  Some implementations (e.g. MACLISP)
support multiple precision integers ("bignums"), invoking the appropriate
routines when overflow is detected so that no user intervention is required.
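
	In such implementations, for example,
.skip
.begin nofill; select 3
(PLUS 1 2 3 4)          %1evaluates to%*  10
(PLUS)                  %1evaluates to%*  0
(TIMES)                 %1evaluates to%*  1
.end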

⊗⊗Predicates on numbers⊗. As in any programming language, it is necessary for a
program to be able to test interesting properties of its data.  LISP offers the
following numeric predicates.
.skip
.begin select 3;nofill;turn on "∂";
(EQUAL a b)	 a∂-1↑=b∂-1↑	   (ZEROP a)		a∂-1↑=0
(LESSP a b)	 a∂-1↑<b∂-1↑	   (MINUSP a)		a∂-1↑<0
(GREATERP a b)	 a∂-1↑>b∂-1↑	   (ODDP a)		a∂-1↑ %1is odd%*
.end

	There is also a predicate ≡≡(NUMBERP a)≡ which tests whether %3a∂-1↑%* (which
obviously need not be numeric) is a number.

⊗⊗Functions on atoms and lists⊗.
Just as one can combine numbers using arithmetic
functions such as ≡≡plus≡ and ≡≡times≡,  so can one combine atoms and lists using
list functions.  

⊗⊗Car and cdr⊗.  The two basic functions for decomposing lists are ≡≡car≡ and ≡≡cdr≡. 
If %3x%* is a non-empty list then  ≡≡car[x]≡ is ≡≡x↓1≡, the first element of %3x%*, and
≡≡cdr[x]≡ is ≡≡(x↓2 ... x↓n)≡, the list of all elements of %3x%* but the first.  Thus
≡≡car[(WE FREE FLEAS)]≡ is ≡≡WE≡ and ≡≡cdr[(WE FREE FLEAS)]≡ is ≡≡(FREE FLEAS)≡. 

⊗⊗Cons⊗.  The basic function for constructing larger lists from smaller is
≡≡cons≡.  If %3x%* is a list then ≡≡cons[a,x]≡ (abbreviated ≡≡a.x≡) is the list whose
≡≡car≡ is %3a%* and whose ≡≡cdr≡ is %3x%*.  Thus ≡≡WE.(FREE FLEAS)≡ is ≡≡(WE FREE FLEAS)≡,
while ≡≡(KING LEAR).(IS A MAN)≡ is ≡≡((KING LEAR) IS A MAN)≡, not
≡≡(KING LEAR IS A MAN)≡.  In general ≡≡a.(b ... z)≡ is ≡≡(a b ... z)≡. 

⊗⊗List⊗.  So far we have thought of ≡≡(CAT DOG)≡ as simply a list of two atoms. 
However, the innocuous parentheses conceal a function for forming lists from
objects, in the sense that ≡≡(CAT DOG)≡ may be considered an abbreviation of
≡≡list[CAT,DOG]≡.  The function ≡≡list≡ simply forms its arguments into a
list.

	In effect this permits us to think of ≡≡(I AM n)≡ as ≡≡list[I, AM, n]≡,
that is, the result of applying ≡≡list≡ (the function represented by the
parentheses of ≡≡(I AM n)≡) to the three objects %3I%*, ≡≡AM≡, and the value of %3n%*. 
The reason we need to identify the function ≡≡list≡ explicitly in this way is
that in representing LISP programs as LISP data, every function must be made
explicit as the first element of a list.  Hence in LISP we should write
≡≡(LIST 'I 'AM N)≡.

	In the special case when all the arguments to ≡≡list≡ are constant, as
in ≡≡(I AM FRED)≡, we may more conveniently write ≡≡'(I AM FRED)≡ instead of
≡≡(LIST 'I 'AM 'FRED)≡.  However, this use of ≡≡QUOTE≡ does not work when one or
more of the arguments are not constants.  (When, and only when, using the
predicate ≡≡eq≡ or the pseudo-functions ≡≡rplaca≡ and ≡≡rplacd≡, discussed later,
this alternative does not even work for constant arguments!)
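
	The difference matters as soon as a variable appears among the
arguments.  Supposing %3N%* to be bound to %33%* (a hypothetical binding for
illustration):
.skip
.begin nofill; select 3
(LIST 'I 'AM N)         %1evaluates to%*  (I AM 3)
'(I AM N)               %1evaluates to%*  (I AM N)
.end
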
⊗⊗Append⊗.  The function ≡≡append≡ joins several lists into one long list whose
elements are those of the original lists.  It can be thought of as a variant of
≡≡list≡ which first strips off the outermost pair of parentheses from each of its
arguments before forming a list from them.  The abbreviation "≡≡a@b@c@...@z≡"
for "≡≡append[a,b,c,...,z]≡" is sometimes used.  Thus ≡≡(KING LEAR)@(IS A MAN)≡
is ≡≡(KING LEAR IS A MAN)≡.  ≡≡(HERE IS)@NIL@(THE)@(ONLY (STRONG MAN))@(LEFT)≡ is
≡≡(HERE IS THE ONLY (STRONG MAN) LEFT)≡.   Notice how the empty list ≡≡NIL≡, as
an argument to ≡≡append≡, has no effect on the result.  Also note that if some
list element is itself a list, as is ≡≡(STRONG MAN)≡, then it will still be a
list in the result.  Two degenerate cases are ≡≡append[a] = a≡ and
≡≡append[] = NIL≡.

	In comparison with ≡≡list≡ and ≡≡append≡, ≡≡cons≡ seems an odd fish.  It does a
little of the work of each of ≡≡list≡ and ≡≡append≡.  Consider again
≡≡(KING LEAR).(IS A MAN)≡, which is ≡≡((KING LEAR) IS A MAN)≡.  ≡≡cons≡ appears to
treat its first argument as ≡≡list≡ does and its second as ≡≡append≡ does.  In
fact, ≡≡cons≡ seems quite redundant, since ≡≡x.y≡ can be expressed as ≡≡(x)@y≡. 
The importance of ≡≡cons≡ is that it is an easy-to-implement primitive which can
be used to define both ≡≡list≡ and ≡≡append≡.  Let us first deal with its
implementation, which gives an opportunity to talk about S-expressions at the
same time.

⊗⊗S-expressions that are not lists or atoms⊗.
A little thought will show that
≡≡a.b≡ is not a list when %3b%* is not a list.  One way this may happen is if %3b%* is
an atom and is not ≡≡NIL≡.  Nevertheless, ≡≡a.b≡ is defined, and its ≡≡car≡ is %3a%*
and its ≡≡cdr≡ %3b%*.  There arises the question of how to write the result of such
an operation.  When %3b%* is a non-null atom ≡≡a.b≡ is written in LISP as ≡≡(a . b)≡. 
Hence ≡≡FEAR.NOT≡ is written ≡≡(FEAR . NOT)≡.  When %3b%* is neither an atom nor a
list, but can be constructed from atoms using only ≡≡cons≡, then ≡≡a.b≡ can be
written by writing the first parenthesis of %3b%*, then %3a%* followed by a space,
then the rest of %3b%*.  (This is the same rule one could use even when %3b%* is a
list.)  Thus ≡≡FEAR.(ME . NOT)≡ can be written ≡≡(FEAR ME . NOT)≡.  Alternatively
it can be written ≡≡(FEAR . (ME . NOT))≡ (sometimes called "dotted pair"
notation), but the former is preferred.

	If ≡≡(FEAR . NOT)≡ is neither a list nor an atom, then what is it?  We
need the notion of a data-type where ⊗⊗cons⊗ can be applied indiscriminately.  The
data-type ⊗⊗S-expression⊗ includes all the atoms together with everything one can
get starting with atoms and applying ⊗⊗cons⊗ repeatedly.  (A mathematician would
say that the set of S-expressions is the closure of the set of atoms under
⊗⊗cons⊗, just as the set of natural numbers is the closure of the set {%30%*} under
⊗⊗successor⊗.)  In particular, all atoms and all lists are S-expressions, as we
have already observed; now we know what other things can be S-expressions.

⊗⊗Computer representation of S-expressions⊗.
We have already seen the
representation of S-expressions on paper.  Inside the computer S-expressions
are represented altogether differently.  All S-expressions (with the exception
of small numbers in some implementations) are represented as pointers to
locations in memory.  A number points to a location containing its value. 
A symbol points to a list of the symbol's properties (its "property list",
discussed later).  The result of evaluating ≡≡(CONS A B)≡ is a pointer to a
location containing a pair of pointers; the side-effect of this evaluation is
the construction of such a location.  These three exhaust the traditional
possibilities for S-expressions.

	We now consider the problem of using ≡≡cons≡ to define ≡≡list≡ and
≡≡append≡.  The following equations serve to define ≡≡list≡; we shall treat
the LISP representation of this definition later when we have more artillery.
.skip
.begin nofill; select 3
        () = NIL	 (i.e. list[] = NIL)
        (a b ... z) = a.(b ... z).
.end

	It follows from this definition of ≡≡list≡ that any list may be
constructed by starting out with ≡≡NIL≡ and building up the list from right to
left using one application of ≡≡cons≡ for each list element.  Thus we can
build up ≡≡(a b c)≡ by starting with ≡≡NIL≡, forming ≡≡(c)≡ as ≡≡c.NIL≡,
then forming ≡≡(b c)≡ as ≡≡b.(c)≡, and finally forming ≡≡(a b c)≡ as ≡≡a.(b c)≡.
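
	In LISP notation the same right-to-left construction reads as follows
(the atoms are quoted so that the atoms themselves, rather than the values of
variables, become the elements):
.skip
.begin nofill; select 3
(CONS 'C NIL)                       %1evaluates to%*  (C)
(CONS 'B (CONS 'C NIL))             %1evaluates to%*  (B C)
(CONS 'A (CONS 'B (CONS 'C NIL)))   %1evaluates to%*  (A B C)
.end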

	The following definition of ≡≡append≡ for the case of two arguments
should not require explanation.
.skip
.begin nofill; select 3
        NIL@b = b
        (x.y)@b = x.(y@b)≡.
.end
	This definition of ≡≡append≡ depends on being able to reduce the problem
of forming ≡≡a@b≡ to the smaller problem of forming ≡≡y@b≡ when %3y%* is the ≡≡cdr≡ of
%3a%*, provided %3a%* is not ≡≡NIL≡, a case provided for explicitly.  To extend the
definition to three or more arguments it suffices to have the rule
.skip;begin select 3;nofill;
        a@b@c@d@...@z = a@[b@c@d@...@z].
.end
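
	The two-argument definition can be seen at work in a short expansion;
each step peels one element off the first argument and reattaches it with
≡≡cons≡.
.skip
.begin nofill; select 3
(A B)@(C)  =  A.((B)@(C))
           =  A.(B.(NIL@(C)))
           =  A.(B.(C))
           =  (A B C)
.end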

⊗⊗Reverse, length, subst⊗.  The following three functions are provided in almost
all implementations of LISP.
.skip;begin select 3; nofill; turn on "↓"
reverse[(a↓1 ... a↓n)]       (a↓n ... a↓1). 

length[(a↓1 ... a↓n)]        n.

subst[a,b,c]                 the result of substituting %3a%* for every
                             occurrence of %3b%* in the S-expression %3c%*
                             (including occurrences within nested lists of %3c%*;
                             the original copy of %3c%* is left unchanged)
.end
⊗⊗Cadr etc.⊗.  ≡≡car≡'s and ≡≡cdr≡'s in combination arise so often that their
various compositions are themselves made available as functions.  ≡≡cadr[x]≡ is
≡≡car[cdr[x]]≡, ≡≡cdaadr[x]≡ is ≡≡cdr[car[car[cdr[x]]]]≡, etc.  The general
formation rule should be clear.  A limit of four ≡≡car≡s and ≡≡cdr≡s is generally
imposed, giving a total of 30 such functions counting ≡≡car≡ and ≡≡cdr≡, ≡≡cr[x]≡
not being permitted.
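
	For example, if %3x%* is ≡≡(A (B C) D)≡ then ≡≡cadr[x]≡ is ≡≡(B C)≡
and ≡≡cdadr[x]≡ is ≡≡(C)≡.  In LISP notation,
.example
(CADR '(A (B C) D))
.end
evaluates to ≡≡(B C)≡, while ≡≡(CDADR '(A (B C) D))≡ evaluates to ≡≡(C)≡.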

⊗⊗Predicates on lists and atoms.⊗
The basic predicates are ≡≡equal≡, ≡≡atom≡,
≡≡null≡, and ≡≡member≡.  ≡≡equal[a,b]≡, or ≡≡a=b≡, holds just when %3a%* and %3b%* are
the same expression, which, defined rigorously, means that they are either the
same atom, or satisfy ≡≡equal[car[a],car[b]]≡ and ≡≡equal[cdr[a],cdr[b]]≡. 
≡≡atom[a]≡ holds when %3a%* is an atom.  ≡≡null[a]≡ is equivalent to ≡≡a=NIL≡. 
≡≡member[a,b]≡ holds when %3a%* is an element of the list %3b%*.

⊗⊗Eq⊗.  There is also a predicate ≡≡eq≡ which plays two roles.  First it permits
testing equality of atoms, which is a useful primitive if ≡≡equal≡ itself is not
given as a primitive.  Second, it has an implementation-dependent aspect,
namely that ≡≡eq[a,b]≡ holds when %3a%* and %3b%* are not merely the same list but
in fact the same representation of that list.

	It should be clear that ≡≡eq≡ can be thought of as testing pointer
equality.  This makes ≡≡eq≡ very easy to implement.  ≡≡equal≡ can then be defined
in terms of ≡≡eq≡, just as ≡≡append≡ and ≡≡list≡  can be defined in terms of
≡≡cons≡.  This establishes the importance of ≡≡eq≡ as a primitive along with
≡≡cons≡.

	With an understanding of what is actually involved in building a list,
we can now see a distinction between ≡≡'(A B C)≡ and ≡≡(LIST 'A 'B 'C)≡.
Every time ≡≡'(A B C)≡ is evaluated we get not merely the ≡≡equal≡ of what we got
last time we evaluated it, but in fact the ≡≡eq≡, that is, the same pointer to
the same representation of ≡≡(A B C)≡, because evaluation goes no further than
stripping off the ≡≡QUOTE≡ from the list.  However every time we evaluate
≡≡(LIST 'A 'B 'C)≡, although we get the ≡≡equal≡ of what we got last time we
evaluated it, we do not get the ≡≡eq≡.  This can easily make a difference to a
program that uses the ≡≡eq≡ predicate.
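
	For example, the form
.example
((LAMBDA (X) (EQ X X)) '(A B C))
.end
evaluates to %3T%*, since both occurrences of %3X%* are bound to the same
pointer, whereas in typical implementations ≡≡(EQ (LIST 'A) (LIST 'A))≡
evaluates to ≡≡NIL≡ even though ≡≡(EQUAL (LIST 'A) (LIST 'A))≡ is %3T%*:
each evaluation of ≡≡LIST≡ constructs a fresh location.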

	This reveals a limitation of thinking denotationally, as we have been
doing up till now, about objects that have process-related functions and/or
predicates defined on them.  Thus it is important to keep straight which parts
of LISP can be dealt with in a purely denotational way and which require a
notion of process to support them.  Later we shall pursue the process model of
LISP in more detail.  For the moment it suffices to remark that we shall draw a
line between the purely denotational part of LISP and the process-oriented part
by using only LISP notation for process-oriented constructs.  Thus in informal
discussion, although we use "≡≡x+y≡" rather than "≡≡(PLUS X Y)≡" we shall use
"≡≡(GO L)≡" in preference to "≡≡go[l]≡."  This is not a standard convention in
discussing LISP.  More usually, one or the other format "≡≡f[x,y]≡" or "≡≡(F X Y)≡"
is used consistently throughout.

⊗⊗And or not.⊗  The three truth functions are ≡≡not≡, ≡≡and≡, and ≡≡or≡.  We
abbreviate "≡≡not[x]≡" to "≡≡¬x≡", "≡≡and[a,b,c,...,z]≡" to "≡≡a∧b∧c∧...∧z≡", and
"≡≡or[a,b,c,...,z]≡" to "≡≡a∨b∨c∨...∨z≡".  Not surprisingly, ≡≡and[a] = or[a] = a≡,
≡≡and[] = T≡, and ≡≡or[] = NIL≡.

	It is a peculiarity of LISP that those of its functions that expect
truth values will in fact accept any S-expression, treating ≡≡NIL≡ as ≡≡false≡
and all others as ≡≡true≡.  Though all LISP predicates and ≡≡not≡ return only %3T%*
or ≡≡NIL≡, the functions ≡≡and≡ and ≡≡or≡ (taking arbitrarily many arguments)
return respectively their first null argument or their first non-null argument,
taking the last argument if no earlier one was satisfactory.  (Exceptions are
LISP 1.6 and UCI-LISP, where ≡≡and≡ and ≡≡or≡ always return %3T%* or ≡≡NIL≡.) 
This description omits the case of no arguments, which must be dealt with
explicitly by taking ≡≡and[] = T≡ and ≡≡or[] = NIL≡.  If ≡≡and≡ and ≡≡or≡ are only
given as arguments the truth values %3T%* and ≡≡NIL≡, then all this agrees with
standard usage: ≡≡a∧...∧false∧...∧z = false, true∧...∧true = true,
a∨...∨true∨...∨z = true≡, and ≡≡false∨...∨false = false≡.

	Though ≡≡not≡ only returns %3T%* or ≡≡NIL≡, it too may take arbitrary
values as its argument, returning %3T%* if and only if the argument is ≡≡NIL≡.
The truth function ≡≡not≡ can be seen to be identical in its behavior to the
predicate ≡≡null≡.  Restricted to arguments %3T%* and ≡≡NIL≡, ≡≡not≡ behaves
in the standard way, namely ≡≡¬true = false≡ and ≡≡¬false = true≡.

	An alternative interpretation for ≡≡and≡ and ≡≡or≡ is possible in the
more general case when arbitrary S-expressions are used for arguments.  ≡≡(AND a b)≡
may be thought of as "≡≡a, AND≡ if so then %3b%*, otherwise ≡≡NIL≡," while ≡≡(OR a b)≡
may be read "≡≡a, OR≡ failing that, then %3b%*," where ≡≡NIL≡ is considered
failure.  For example, evaluating ≡≡(OR (F X) (G X))≡ would return the value of
≡≡(F X)≡ without evaluating ≡≡(G X)≡ unless that value was ≡≡NIL≡, in which case
it would return the value of ≡≡(G X)≡, whether ≡≡NIL≡ or not.
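
	In implementations where ≡≡and≡ and ≡≡or≡ return the values of
their arguments we would thus have
.skip;begin nofill; select 3
        (AND 1 2 3) = 3              (AND 1 NIL 3) = NIL
        (OR NIL NIL 3) = 3           (OR 1 2) = 1,
.end
the last returning %31%* without ≡≡2≡ ever being evaluated.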

.turn on "↓∂"
⊗⊗Conditional expressions⊗.  The value
of the conditional expression
"≡≡[a↓1⊃b↓1, ..., a↓n⊃b↓n]≡" is ≡≡b↓i≡ where %3i%* is the least integer such that
≡≡a↓i≡ holds (i.e. such that ≡≡a↓i≡ is not ≡≡NIL≡).  If there is no such %3i%* then
it is ≡≡NIL≡.  The corresponding LISP form is ≡≡(COND (a*∂-1↓1 b*∂-1↓1) ... (a*∂-1↓n b*∂-1↓n))≡
where the * signifies the corresponding LISP forms of the terms inside the
expression.  Thus ≡≡[x<0⊃-x, T⊃x]≡ = ≡≡|x|≡ for all %3x%*, so that one could write
≡≡(COND ((MINUSP X) (MINUS X)) (T X))≡ as an alternative to ≡≡(ABS X)≡.

	The process of evaluating a ≡≡COND≡ is to evaluate each ≡≡a↓i≡ in turn
until something other than ≡≡NIL≡ turns up, and then to evaluate the
corresponding ≡≡b↓i≡.  This fact is needed to understand how conditional
expressions interact with the process-oriented parts of LISP discussed later.

	The %3n%* components of the conditional expression are sometimes referred
to as clauses.
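
	As a further illustration, a three-clause conditional computes the
sign of a number %3X%*:
.example
(COND ((MINUSP X) -1) ((ZEROP X) 0) (T 1))
.end
	This evaluates to %3-1%*, %30%*, or %31%* according as %3X%* is
negative, zero, or positive.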

⊗⊗Lambda expressions⊗.  So far the only functions we have been able to name are
those already supplied by LISP.  We have not been able to define functions of
our own.  In most other programming languages, defining new functions is
accomplished by some construct of the form "define ≡≡f(x↓1,...,x↓n)≡ =
E≡≡(x↓1,...,x↓n)≡" where "E≡≡(x↓1,...,x↓n)≡" is some expression referring to the
function's arguments ≡≡x↓1,...,x↓n≡.  Two things have happened here; a function
has been defined, and it has been given a name.  The first of these does not
involve any notion of machine state, as functions have an abstract existence
independent of a computer's memory, just as do numbers.  However, the second
implies that some change of state of a machine has happened, and that in future
any reference to %3f%* may be taken to be a reference to the function now
associated with %3f%*.

	LISP makes this distinction by permitting functions to be defined
without associating them with any particular name.  The technique for doing
this was developed by the logician Alonzo Church in the 1930's, and gave rise
to a subject that came to be known as the ⊗⊗λ-calculus⊗.
	The rationale for the λ-calculus is often represented as follows.  It
is commonplace to refer to ≡≡x↑2+y↑3≡ as a function of x and y.  However, the
way we apply functions, as in "≡≡f[5,6]≡," raises the question of which
arguments are to be associated with which variables.  To eliminate this
ambiguity, we shall take "≡≡λxy.x↑2+y↑3≡" to denote the function %3f%* such that
≡≡f[5,6]≡ is ≡≡5↑2+6↑3≡.  Had we written "≡≡λyx.x↑2+y↑3≡", ≡≡f[5,6]≡ would have been
≡≡6↑2+5↑3≡.

	The LISP equivalent of "≡≡λa...z.≡E" is ≡≡(LAMBDA (A ... Z) e*)≡ where ≡≡e*≡ is
the LISP equivalent of the expression E.  For example, E could be "≡≡x+y≡",
and the LISP equivalent of "≡≡λxy.x+y≡" would then be ≡≡(LAMBDA (X Y) (PLUS X Y))≡. 
Wherever a function such as ≡≡PLUS≡ or ≡≡LIST≡ may be used in a LISP program, so
may a ≡≡LAMBDA≡-expression.

⊗⊗Free and bound occurrences⊗
We say that the variables ≡≡a≡ through ≡≡z≡ ⊗⊗occur⊗
⊗⊗bound⊗ in "≡≡λa...z.≡E".  All other non-numeric atoms in E not occurring bound in
some subexpression of E are said to ⊗⊗occur free⊗ in "≡≡λa...z.≡E".

	The application of the function ≡≡λa...z.E≡ to arguments ≡≡a'≡
through ≡≡z'≡ yields the value of E in which the free occurrences of
≡≡a,...,z≡ are respectively taken to be ≡≡a',...,z'≡.  Thus ≡≡[λx.x+2][3]≡, the
result of applying ≡≡λx.x+2≡ to %33%*, is ≡≡x+2≡ where %3x%* is taken to be %33%*,
i.e. %35%*.  Hence by the same token the corresponding form ≡≡((LAMBDA (X) (PLUS
X 2)) 3)≡ evaluates to %35%*.  We say that the variables ≡≡a≡ through ≡≡z≡ are
⊗⊗lambda bound⊗ to the values ≡≡a'≡ through ≡≡z'≡ respectively during the evaluation
of the ⊗⊗body⊗ E of ≡≡λa...z.≡E when it is applied to ≡≡a'≡ through ≡≡z'≡.
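
	For a two-variable example, ≡≡[λxy.list[y,x]][1,2]≡ is ≡≡list[y,x]≡
with %3x%* taken to be %31%* and %3y%* taken to be %32%*, i.e. ≡≡(2 1)≡.
The corresponding LISP form is
.example
((LAMBDA (X Y) (LIST Y X)) 1 2)
.end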

 	A LISP ≡≡LAMBDA≡-expression is not a form, and ≡≡lambda≡ is not a
function.  ≡≡LAMBDA≡-expressions may only be used where they will be interpreted
as functions.  The two possible places for this are in the function position of
a program, and as the argument to ≡≡FUNCTION≡.  In the latter usage, the
≡≡LAMBDA≡-expression may be operated on as a datum of type function.   (Most
implementations treat ≡≡FUNCTION≡ as ≡≡QUOTE≡, so that such a datum will also be
a list whose ≡≡car≡ is ≡≡LAMBDA≡.)

⊗⊗A self printing ≡≡LAMBDA≡ expression.⊗
A favorite game computer types play is
to write the smallest program that, when run, prints itself out, no more and no
less.  With languages like LISP and APL that operate by printing out the value
of their input, this is unfairly easy.  The LISP or APL program %31%* will,
when typed in, result in the print-out of itself.  This is not possible in most
other languages because their programs must be commands rather than expressions
denoting values.  Under these circumstances the usual technique is to write the
program in two almost identical parts, one of which is the actual program, the
other of which is typically a string designed to look like the first part.  The
program prints two copies of the string.  A little care is usually needed to
get the start and end right.

	This technique can be exemplified very easily in LISP by applying
the denotation of the expression "≡≡λx.(x (QUOTE x))≡" to the LISP equivalent
of that expression.  In LISP this would be:
.skip;begin nofill; select 3;
        ((LAMBDA (X) (LIST X (LIST (QUOTE QUOTE) X)))
         (QUOTE (LAMBDA (X) (LIST X (LIST (QUOTE QUOTE) X)))))
.end
	The first line is the ≡≡LAMBDA≡-expression.  The second is its argument,
the same ≡≡LAMBDA≡-expression but quoted so that it will not be evaluated.  In
almost any programming language, the most likely solution one will encounter to
this problem will be the above tailored to the facilities available.  The
brevity of the LISP solution speaks well for LISP.
.skip;begin flush right;
"To iterate is human,
To recurse divine."
.end

⊗⊗Recursive LAMBDA⊗.  It is possible for a function defined using ≡≡lambda≡ to refer
to itself, thus permitting recursive definitions, with the help of the ≡≡label≡
construct.  Like ≡≡lambda≡, ≡≡label≡ is not a function but rather a way of
constructing functions.  The expression "≡≡label[f,≡E]", where "%3f%*" is a variable
and E is a lambda-expression, can be considered to be just E but with any
occurrences of "%3f%*" inside E referring to E itself.  Thus all ≡≡label≡ling
accomplishes is to give a name to the whole lambda-expression that can be used
within the lambda-expression, though nowhere else.

	The LISP equivalent of "≡≡label[f,≡E]" is ≡≡(LABEL F e)≡ where %3e%* is
the LISP equivalent of the expression E.  Following a convention of
mathematics, we shall abbreviate  ≡≡"label[f,≡E]" to "≡≡αf.≡E."  The "%3α%*" (Greek
mu) stands for minimization.  For technical reasons discussed later, "≡≡αf.≡E"
can be thought of as "the minimal %3f%* equal to E."

	There is a fairly obvious way of coping with the ≡≡label≡ construct
when evaluating ≡≡(LABEL f e)≡, namely to replace it by %3e%*, but at the same
time replacing any occurrences of %3f%* in %3e%* by ≡≡(LABEL f e)≡.  This replacement
we may call ⊗⊗label-expansion⊗.  In turn those occurrences of ≡≡(LABEL f e)≡ may
themselves be expanded in this way as far as is necessary to permit computation
to proceed.  Generally such occurrences are confined to a clause of a
conditional expression, so that eventually the conditional will not require the
evaluation of that clause.

	Though this account of ≡≡label≡ is the "official" one, it is much
simpler to think of ≡≡(LABEL f (LAMBDA (a ... z) e))≡ as being
≡≡(DEFUN f (a ... z) e)≡, meaning that the FUNction %3f%* is being DEFined to be
%3e%*, which may refer recursively to %3f%*.

	As a simple example, we may multiply the numbers 2 and 3 using only
the arithmetic function ≡≡PLUS≡ and the predicate ≡≡ZEROP≡, with
.skip;begin nofill;select 3
((LABEL M (LAMBDA (X Y)
           (COND ((ZEROP X) 0)
                 (T (PLUS Y (M (PLUS X -1) Y))))))
 2 3)
.end
	This recursive definition of %3M%* as multiplication relies on the two
facts ≡≡0y = 0≡ and ≡≡xy = y+[x-1]y≡.  Note that %3M%* is ⊗⊗local⊗ to this expression,
just as are the lambda-variables %3X%* and %3Y%*; the use of this expression has no
bearing on the meaning of %3M%* in other expressions.

	The corresponding ≡≡DEFUN≡ form would be:
.example
(DEFUN M (X Y)
      (COND ((ZEROP X) 0)
            (T (PLUS Y (M (PLUS X -1) Y)))))≡
.end
while its application to %32%* and %33%* is simply ≡≡(M 2 3)≡.  The only critical
difference here is the notion that ≡≡DEFUN≡ has "permanently" defined %3M%* to
be multiplication, allowing us to refer to %3M%* "subsequently" as though it were
≡≡TIMES≡.  The ≡≡LABEL≡ definition does not have the corresponding concept of
time; it is a timeless definition, or at least can be made so with
sufficiently careful wording in our description of it.

	To see how this works in practice, let us follow in detail the
evaluation of the ≡≡LABEL≡ definition.  Text between ";" and the end of the
line is ignored, a commenting convention adopted in some implementations.
.example
((LAMBDA (X Y)                 ; ≡label-expansion≡≡
  (COND ((ZEROP X) 0)
        (T (PLUS Y ((LABEL M (LAMBDA (X Y)
                              (COND ((ZEROP X) 0)
                                    (T (PLUS Y (M (PLUS X -1) Y))))))
                     (PLUS X -1) Y)))))
 2 3)

(COND ((ZEROP 2) 0)            ; LAMBDA≡-bind %3X%* to %32%*, %3Y%* to ≡≡3
      (T (PLUS 3 ((LABEL M (LAMBDA (X Y)
                            (COND ((ZEROP X) 0)
                                  (T (PLUS Y (M (PLUS X -1) Y))))))
                   (PLUS 2 -1) 3))))

(PLUS 3 ((LABEL M (LAMBDA (X Y)    ; ≡obvious evaluations≡≡
                   (COND ((ZEROP X) 0)
                         (T (PLUS Y (M (PLUS X -1) Y))))))
          1 3))

(PLUS 3 (PLUS 3 ((LABEL M (LAMBDA (X Y)   ; ≡three more such steps≡≡
                           (COND ((ZEROP X) 0)
                                 (T (PLUS Y (M (PLUS X -1) Y))))))
                  0 3)))

(PLUS 3 (PLUS 3 (COND ((ZEROP 0) 0)    ; ≡two more such steps≡≡
                      (T (PLUS 3 ((LABEL M (LAMBDA (X Y)
                                            (COND ((ZEROP X) 0)
                                                  (T (PLUS Y (M (PLUS X -1) Y))))))
                                   (PLUS 0 -1) 3))))))

(PLUS 3 (PLUS 3 0))   ; and eventually we get 6 as we ought.
.end
	We represented the lambda-binding as though the values bound to the
variables were physically substituted for the variables.  Although this in fact
gives one way to implement lambda-binding, sometimes called the ⊗⊗copy-rule⊗,
practical implementations achieve the effect by keeping the environment
independent of the program, updating the environment when lambda-binding is
called for, and consulting the environment when the value of an occurrence of a
variable is actually needed.  Thus although the last step of the above
computation discards material that appeared to take much work to compute, namely
everything in the second branch of the ≡≡COND≡, in fact a close examination of
what an actual implementation would do to generate this large expression would
show that ⊗⊗no⊗ work at all was done on it!  This illustrates an efficiency
resulting from the use of environments.  

	A similar example permits us to append the lists ≡≡(1 2 3)≡ and ≡≡(4 5 6)≡
using only the functions ≡≡cons≡, ≡≡car≡, and ≡≡cdr≡, and the predicate ≡≡null≡.
.example
((LABEL A (LAMBDA (X Y)
           (COND ((NULL X) Y)
                 (T (CONS (CAR X) (A (CDR X) Y))))))
 '(1 2 3) '(4 5 6))
.end
	Here %3A%* plays the role of ≡≡append≡, just as %3M%* played the role of
≡≡times≡ in the previous example.  The relevant properties of ≡≡append≡ we
have used here are ≡≡NIL@y = y≡ and ≡≡x@y = x↓1.[cdr[x]@y]≡ for non-null %3x%*,
which is the same as saying ≡≡[a.b]@y = a.[b@y]≡ when ≡≡x = a.b≡.

	A favorite example is the factorial function.
.example
(LABEL FACT (LAMBDA (X)
             (COND ((ZEROP X) 1)
                   (T (TIMES X (FACT (SUB1 X)))))))
.end
	This uses the two facts ≡≡0! = 1≡ and ≡≡n! = n(n-1)!≡ for positive
%3n%*.  For negative %3n%* this function as represented above is undefined, and
attempting to evaluate it with the methods we used to evaluate the
multiplication example will lead to label-expansion never terminating. 
In practice, storage is consumed during the recursion, and so the process will
not continue very long before the available storage is exhausted.
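
	The corresponding ≡≡DEFUN≡ form is
.example
(DEFUN FACT (X)
       (COND ((ZEROP X) 1)
             (T (TIMES X (FACT (SUB1 X))))))
.end
after which ≡≡(FACT 4)≡ evaluates to %324%*.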

	(The following is mainly for those interested in the mathematics of
≡≡LABEL≡, and assumes some familiarity with the concepts we refer to.)  The
counterpart of ≡≡(LABEL F e)≡ is "≡≡αf.≡E", and is read as "the least %3f%* equal to
E."  "Least" in this context means "the partial function defined on the fewest
arguments," although more generally than LISP permits, E may be any
mathematical object, not merely a function.  The reason "least" is specified is
not because there is any intrinsic merit in being undefined but because, of all
the possible %3f%*'s that might be equal to E, the least happens to be the one
defined by the fairly obvious procedure we called label-expansion above.  (For
example, consider ≡≡αf.λx.[x=0⊃1,xf[x-1]]≡.  This can be seen to be a way of
writing the function ≡≡factorial≡ - in fact the same way we used immediately
above - if we assume that ≡≡factorial≡ is undefined on negative integers. 
However, ≡≡factorial≡ is not the only solution in %3f%* for ≡≡f = λx.[x=0⊃1,xf[x-1]]≡. 
The function which is %30%* for negative numbers but agrees elsewhere with
factorial is also a solution, and is better defined than ≡≡factorial≡.)  For
those familiar with the least-fixed-point operator %3Y%*, "%3α%*" may be considered
merely a convenient syntactic abbreviation of "≡≡Yλ≡".

≡≡FUNCTION≡.  We have already mentioned the use of ≡≡FUNCTION≡ in LISP when
denoting for example the function ≡≡plus≡, which is accomplished with
≡≡(FUNCTION PLUS)≡.  The same applies to functions constructed using ≡≡LAMBDA≡. 
Thus although we may write ≡≡((LAMBDA (X) (PLUS X 2)) 1)≡ to denote %33%*, we must
write ≡≡(APPLY (FUNCTION (LAMBDA (X) (PLUS X 2))) '(1))≡ to denote the same
thing using ≡≡apply≡ explicitly.  

⊗⊗Functionals⊗.  A functional is a function that takes as one or more of its
arguments a function.  The two functionals in general use are ≡≡apply≡ and
≡≡map≡, the latter really being a family of six related functionals.

⊗⊗Apply⊗.  "≡≡apply[f,(a ... z)]≡" may be considered to be the expression for which
"≡≡f[a,...,z]≡" is the abbreviation.  Thus the LISP programs ≡≡(PLUS A B)≡
and ≡≡(APPLY (FUNCTION PLUS) (LIST A B))≡ will evaluate to the same thing.  This
need not stop there, of course; we may consider that it in turn is the
abbreviation for "≡≡apply[apply,(f (a ... z))]≡" and so on.  This might seem to
suggest that ≡≡apply≡ is merely a formality with no role in practical
programming.  Nevertheless it can be of real use in ordinary programming;
we discuss such uses for ≡≡apply≡ in the last part of the EXAMPLES section.
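
	As a simple instance of such use, if the value of %3X%* is a list of
numbers of unknown length, say ≡≡(1 2 3 4)≡, then
.example
(APPLY (FUNCTION PLUS) X)
.end
evaluates to their sum, %310%*, something a single ≡≡PLUS≡ form cannot
express without knowing the length of the list in advance.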

⊗⊗Maps⊗.  LISP offers a way of applying a function not to a whole list but rather
to the individual elements of the list, using the functional ≡≡mapcar≡. 
For example, ≡≡mapcar[minus,(1 2 3)]≡ is the list ≡≡(-1 -2 -3)≡.  The LISP
equivalent is ≡≡(MAPCAR (FUNCTION MINUS) '(1 2 3))≡.  ≡≡mapcar≡ takes as its
first argument a function (of %3n%* arguments) and as its remaining %3n%* arguments
(the same number %3n%*) %3n%* lists of values to apply the function to, returning the
results in the form of a list.  (In some implementations %3n%* is restricted to 1
and the two arguments are given in the other order.)  Thus
≡≡mapcar[plus,(1 2 3),(4 5 6)]≡ is the list ≡≡(5 7 9)≡, just as though the lists
were being added together as vectors.  In full generality,

.once center
≡≡mapcar[f,(x↓1↓1 ... x↓1↓n), ...,  (x↓m↓1 ... x↓m↓n)]≡

would be the list

.once center
≡≡(f[x↓1↓1,...,x↓m↓1]  ...  f[x↓1↓n,...,x↓m↓n])≡.

(Readers familiar with matrix algebra may visualize the %3m%* lists as the %3m%* rows
of an %3m%*x%3n%* matrix.  Then %3f%* is applied to each of the %3n%* columns to yield a
list of %3n%* items.)

⊗⊗Other maps⊗.  One possible variation on ≡≡mapcar≡, called ≡≡maplist≡, is to apply
≡≡mapcar≡'s first argument not to the list elements but to the list "tails",
that is, to the list itself, then its ≡≡cdr≡, then the ≡≡cdr≡ of that, and so on
up to but excluding the empty list.  Thus ≡≡maplist[f, (1 2 3)]≡ is
≡≡(f[(1 2 3)] f[(2 3)] f[(3)])≡ rather than ≡≡(f[1] f[2] f[3])≡.
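
	Thus, using the ≡≡length≡ function introduced earlier,
.example
(MAPLIST (FUNCTION LENGTH) '(A B C))
.end
evaluates to ≡≡(3 2 1)≡, the lengths of the successive tails.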

	Another variation, ≡≡mapcan≡ (or ≡≡mapconc≡ in INTERLISP) has the effect
of ≡≡mapcar≡ followed by an application of ≡≡append≡ to the result.  That is,
≡≡mapcan[f, (1 2 3)]≡ is ≡≡f[1]@f[2]@f[3]≡, which is also
≡≡apply[append,mapcar[f,(1 2 3)]]≡.  ≡≡mapcan≡ is useful when each application of
%3f%* is to produce, not a single value but a list of values.  As a special case
of this, %3f%* may need to produce either no values or one value.
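
	For example, to select just the negative elements of a list we may
have %3f%* return either a one-element list or ≡≡NIL≡:
.example
(MAPCAN (FUNCTION (LAMBDA (X)
                   (COND ((MINUSP X) (LIST X)) (T NIL))))
        '(1 -2 3 -4))
.end
	This evaluates to ≡≡(-2 -4)≡.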
.begin indent 40,40; nofill
"To iterate is illiterate,
To recurse is worse.
To avoid this trap, see
Instructions for ≡≡mapc≡."
.end
	A third variation, ≡≡mapc≡, simply discards the result (or more
accurately, does not form the result in the first place), returning ≡≡NIL≡
instead (or in some implementations the value of its second argument).  It is
used only for its effect, a concept depending on the process model of LISP that
we discuss later.

	The function ≡≡mapcon≡ combines the ≡≡mapcan≡ variation (appends results)
with the ≡≡maplist≡ variation (operates on tails).  The function ≡≡map≡ combines
the ≡≡mapc≡ variation (returns ≡≡NIL≡) with the ≡≡maplist≡ variation.

	In summary, these six functions are
.example
                Elements	Tails
list:           mapcar  	maplist
append:         mapcan  	mapcon
NIL:            mapc    	map
.end
.once center
EXAMPLE LISP PROGRAMS

⊗⊗Example 1. Differentiation of formulae.⊗

	We shall give a program to differentiate formulae containing the
rational algebraic operations ≡≡PLUS, DIFFERENCE, TIMES, ≡and≡≡ QUOTIENT.≡  The
relevant properties of ≡≡D↓xe≡, the derivative of the expression %3e%* with respect
to x, are:
.nofill
≡≡D↓xn = 0≡ for any constant n
≡≡D↓xx = 1≡
≡≡D↓xy = 0≡ for variables %3y%* other than %3x%*
≡≡D↓x[e+f] = D↓xe + D↓xf≡
≡≡D↓x[e-f] = D↓xe - D↓xf≡
≡≡D↓x[ef]  = [D↓xe]f + eD↓xf≡
≡≡D↓x[e/f] = [D↓xe]/f - e[D↓xf]/f↑2≡
.fill

	We shall assume that no algebraic operation has more than two
arguments.
.example
(DEFUN DERIV (EXP)
  (COND ((NUMBERP EXP) 0)
        ((EQUAL EXP 'X) 1)
        ((ATOM EXP) 0)
        ((EQUAL (CAR EXP) 'PLUS)
         (LIST 'PLUS (DERIV (CADR EXP)) (DERIV (CADDR EXP))))
        ((EQUAL (CAR EXP) 'DIFFERENCE)
         (LIST 'DIFFERENCE (DERIV (CADR EXP)) (DERIV (CADDR EXP))))
        ((EQUAL (CAR EXP) 'TIMES)
         (LIST 'PLUS
               (LIST 'TIMES (DERIV (CADR EXP)) (CADDR EXP))
               (LIST 'TIMES (CADR EXP) (DERIV (CADDR EXP)))))
        ((EQUAL (CAR EXP) 'QUOTIENT)
         (LIST 'DIFFERENCE
               (LIST 'QUOTIENT (DERIV (CADR EXP)) (CADDR EXP))
               (LIST 'QUOTIENT
                     (LIST 'TIMES (CADR EXP) (DERIV (CADDR EXP)))
                     (LIST 'TIMES (CADDR EXP) (CADDR EXP)))))))≡
.end
	It is routine to verify that each clause of the ≡≡COND≡ deals with one
of the above identities for differentiation.  The function as defined attempts
no simplification of the result.  Hence evaluating ≡≡(DERIV '(TIMES X X))≡ will
yield ≡≡(PLUS (TIMES 1 X) (TIMES X 1))≡ rather than the simpler ≡≡(TIMES 2 X)≡.  
⊗⊗Example 2. Implementing a fragment of APL with a fragment of LISP⊗

	This example illustrates some aspects of LISP that are not in wide use. 

	First, we shall implement a fragment of APL using M-expressions rather
than S-expressions, to illustrate that the former may be applied not merely to
naming LISP objects in discussion, but to completely specifying a LISP program. 
While this may have been "obvious" in principle, it is more convincing to see
it carried through in practice.

	Second, we shall show how the functionals ≡≡apply≡ and ≡≡mapcar≡, in
conjunction with ≡≡lambda≡ and ≡≡label≡, permit a programming style reminiscent
of a style popular among many APL users.  With a suitable choice of notation
for these two functionals we may capture even more of the APL flavor, without
doing any violence to LISP beyond the superficialities of notation.  This
should permit a deeper comparison of the differences and similarities between
LISP and APL, by removing some of the more superficial differences of style and
notation.

	We shall exhibit some examples of this style, then discuss ways of
using these operations to conveniently implement various APL operations.  We
shall give the definitions associated with this example in the metanotation we
have been using.

	The abbreviations we shall use in this section are not generally
accepted abbreviations for LISP constructs.  They are: ≡≡x↓1≡ for ≡≡car[x]≡, %3x%*λ}
for ≡≡cdr[x]≡, ≡≡f{x}≡ for ≡≡apply[f,x]≡, and ≡≡f<a,...,z>≡ for
≡≡mapcar[f,a,...,z]≡.  We shall also drop the brackets used in
≡≡[a↓1⊃b↓1,...,a↓n⊃b↓n]≡ when no ambiguity will arise.  For clarity we shall
sometimes spell out the test in the final clause of a ≡≡cond≡ instead of relying
on the more efficient and succinct use of ≡≡T≡ as a catch-all final test. 
Finally, we shall express ≡≡label[f,λxy.e]≡ in the more familiar style,
"≡≡f[x,y]: e≡".

	An example of this style is provided by the definition of ≡≡append≡:
.example
append[x,y]:  x=NIL ⊃ y,
              x≠NIL ⊃ x↓1.append[xλ},y].
.end
.turn off "{}"
	Now we consider a programming style appropriate for ≡≡apply≡ and
≡≡mapcar≡.  So far we have thought of ≡≡apply≡ as merely supplying an
alternative, ≡≡apply[f,(a,...,z)]≡, to ≡≡f[a,...,z]≡.  However, when %3f%*'s
arguments come already formed as a list whose elements cannot be individually
written down at the time of writing the program, ≡≡apply≡ can be quite useful. 
For example, if %3x%* is a list of numbers then ≡≡apply[plus,x]≡, or ≡≡plus{x}≡,
the abbreviation promised above, is the sum of those numbers, while ≡≡times{x}≡
is the product.  Some implementations offer the functions ≡≡min≡ and ≡≡max≡,
which take arbitrarily many arguments and return respectively the smallest and
largest.  Then ≡≡min{x}≡ is the smallest element of %3x%* while ≡≡max{x}≡ is the
largest.  ≡≡list{x}≡ is simply %3x%* while ≡≡append{x}≡ assumes that %3x%* is a list of
lists and in effect removes the outermost pair of parentheses from each element
of %3x%*.  In some implementations ≡≡lessp[a,b,c,d,...,z]≡ is permitted, meaning
≡≡a<b<c<d<...<z≡.  Then ≡≡lessp{x}≡ holds just when %3x%* is a list of distinct
numbers arranged in increasing order.  Similar remarks apply to ≡≡greaterp≡ for
decreasing order.  ≡≡and{x}≡ holds just when every element of %3x%* is ⊗⊗true⊗
(non-null) while ≡≡or{x}≡ holds just when some element of %3x%* is ⊗⊗true⊗. (We may
think of "≡≡and{ }≡" as "for all" and "≡≡or{ }≡" as "there exists.")

	The use of ≡≡f{x}≡ for ≡≡apply[f,x]≡ emphasizes that %3f%* is being used in
its normal role as a function, namely as ≡≡f[x↓1,...,x↓n]≡.  By the same
token the use of ≡≡f<a,...,z>≡ for ≡≡mapcar[f,a,...,z]≡ emphasizes that %3f%*
is being applied to something, in this case a "cross-section" of the lists %3a%*
through %3z%*.  Thus ≡≡f<g<x>>≡ can be readily seen to be ≡≡(f[g[x↓1]] ... f[g[x↓n]])≡.
.turn on "{}"
	Combining ≡≡mapcar≡ with ≡≡apply≡ can lead to some powerful
constructs.  For example, ≡≡and{minusp<x>}≡ holds when every element of the list
%3x%* is negative, while ≡≡or{minusp<x>}≡ holds when some element of %3x%* is
negative.  Using ≡≡\≡ we may take any predicate %3p%* and "package"
it as its two list versions, ≡≡\x.and{p<x>}≡ and ≡≡\x.or{p<x>}≡, which
respectively test whether the argument contains ⊗⊗only⊗ elements satisfying %3p%* or
⊗⊗any⊗ elements satisfying %3p%*.

	The dot product of two vectors %3x%* and %3y%* represented as lists of
numbers is ≡≡plus{times<x,y>}≡.  The transpose of a matrix %3m%* represented as a
list of lists is ≡≡mapcar{list.m}≡, or simply ≡≡list<{m}>≡ if we adopt the
obvious abbreviation "≡≡f<{x}>≡" for "≡≡mapcar{f.x}≡."  Then the product of two
such matrices %3m%* and %3n%* is ≡≡[\i.[\j.plus{times<i,j>}]<list<{n}>>]<m>≡,
if we take ≡≡plus≡ to be the exterior operator and ≡≡times≡ the interior,
which is the ordinary kind of matrix multiplication.
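
	In standard LISP notation the dot product might be written as follows
(the name ≡≡DOTPROD≡ is our own choice):
.example
(DEFUN DOTPROD (X Y)
       (APPLY (FUNCTION PLUS) (MAPCAR (FUNCTION TIMES) X Y)))
.end
	Then ≡≡(DOTPROD '(1 2 3) '(4 5 6))≡ evaluates to %332%*, assuming an
implementation whose ≡≡MAPCAR≡ accepts more than one list.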

	This generalizes to any other matrix product with given exterior and
interior operators.  For example, using ≡≡min≡ as the exterior operator and
≡≡plus≡ as the interior, we may readily compute the ≡≡min/plus≡ matrix product,
an operation associated with finding least-cost routes in networks.  If we use
instead respectively ≡≡or≡ and ≡≡and≡ then we may compute the product of
Boolean matrices, an operation associated with detecting the existence of
routes in networks.

⊗⊗APL⊗  Now let us consider how LISP may be used to implement aspects of APL in a
reasonably straightforward way.  (The reader is referred to the APL article
elsewhere in this encyclopedia for a detailed description of the APL
programming language.)  We shall represent scalars as LISP numbers, vectors as
lists of numbers, and higher dimensional objects as lists of lists. Thus the
APL vector ≡≡1 2 3≡ is the list ≡≡(1 2 3)≡ while the %33%*x%33%* identity matrix is
≡≡((1 0 0) (0 1 0) (0 0 1))≡, a list of the three rows of the matrix, in the
usual order.

	We have in effect already shown how to implement APL's ≡≡reduce≡
functional; it is simply LISP's ≡≡apply≡.  Thus we may write +/⊗⊗X⊗ as ≡≡plus{x}≡. 
Actually ≡≡apply≡ will permit a wider range of functions to be applied than will
≡≡reduce≡.  Note that APL cannot readily achieve the effect of
≡≡append{f<x>}≡ because of the APL ban on "ragged-edge" arrays;  even though
≡≡append{f<x>}≡ itself may be representable in APL, the intermediate ≡≡f<x>≡
may not.

	To implement the APL ≡≡compression≡ operation, which forms a vector of
just those elements of %3b%* with corresponding %31%*'s in %3a%*, we may use
≡≡append{and<[\e.e=1]<a>,list<b>>}≡.  The effect of ≡≡list<b>≡ is to enclose each
element of %3b%* in parentheses that ≡≡append≡ will simply remove later.  The use
of ≡≡and≡ is: if ≡≡a↓i≡ (%3e%* in the lambda-expression) is equal to %31%* then
≡≡(b↓i)≡ will be selected by the ≡≡and≡, otherwise ≡≡NIL≡ will be.  Finally the
lists, each having %31%* or %30%* elements depending on whether or not the
corresponding element was selected, are ≡≡append≡ed together to form
the compressed list.

	Below we tabulate 15 APL operations defined in the metanotation for
LISP.  The reader is again referred to the APL article in this Encyclopedia for
their non-LISP definitions.  Only a few of our definitions attempt to handle
the full applicability of these functions to both scalars and vectors.
.begin nofill; group;
≡≡reduce[a,b]:     a{b}.

compress[a,b]:   append{and<[\i.i=1]<a>,list<b>>}.

expand[a,b]:     a=NIL ⊃ NIL,
                 a↓1=1  ⊃ b↓1.expand[aλ},bλ}],
                 a↓1=0  ⊃ NIL.expand[aλ},b].

iota[n]:         n=0 ⊃ NIL,
                 n>0 ⊃ iota[n-1]@(n).

index[n,v]:      n=1 ⊃ v↓1,
                 n>1 ⊃ index[n-1,vλ}].

size[v]:         atom[v]  ⊃ NIL,
                 ¬atom[v] ⊃ length[v].size[v↓1].

ravel[v]:        atom[v]  ⊃ (v),
                 ¬atom[v] ⊃ append{ravel<v>}.

take[n,v]:       n<0 ⊃ drop[n+length[v],v],
                 n=0 ⊃ v,
                 n>0 ⊃ v↓1.take[n-1,vλ}].

drop[n,v]:       n<0 ⊃ take[n+length[v],v],
                 n=0 ⊃ v,
                 n>0 ⊃ drop[n-1,vλ}].

reversal[v]:     reverse[v].

rotation[n,v]:   n<0 ⊃ take[n,v]@drop[n,v],
                 n≥0 ⊃ drop[n,v]@take[n,v].

indexof[v,s]:    ¬atom[s] ⊃ [\i.indexof[v,i]]<s>,
                 null[v]  ⊃ 1,
                 v↓1=s    ⊃ 1,
                 T        ⊃ 1+indexof[vλ},s].

membership[a,b]: atom[a]   ⊃ [member[a,b]⊃1, T⊃0],
                 ¬atom[a]  ⊃ [\i.membership[i,b]]<a>.

  ≡(The following is used only by decode and encode.  ≡≡rad[(3 4 5)]≡ is ≡≡(20 5 1)≡)≡≡
rad[r]:          maplist[\x.times{x},rλ}@(1)].

decode[r,a]:     plus{times<rad[r],a>}.

encode[r,a]:     [\xy.[[a/x] mod y]]<rad[r],r>.
  ≡(This discards any "overflow".)
.end
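A few of the numeric entries can be checked directly.  Here is a present-day Python transcription of ≡≡rad≡, ≡≡decode≡ and ≡≡encode≡ (illustrative only; lists stand in for APL vectors):

```python
from math import prod

def rad(r):
    """rad[(3 4 5)] is (20 5 1): products over successive tails of cdr(r)@(1)."""
    tail = list(r[1:]) + [1]                          # the maplist argument
    return [prod(tail[i:]) for i in range(len(tail))]

def decode(r, a):
    """Value of the digit vector a in the mixed radix r: plus{times<rad[r],a>}."""
    return sum(w * d for w, d in zip(rad(r), a))

def encode(r, a):
    """Digit vector of a in radix r; any overflow is discarded, as in the table."""
    return [(a // w) % m for w, m in zip(rad(r), r)]
```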

	To emphasize again that the only two departures in the above example
from normal practice in LISP programming are the use of metanotation and the
considerable exercising of ≡≡apply≡ and ≡≡mapcar≡, let us repeat the
definitions of ≡≡membership≡ and ≡≡rad≡ in the standard LISP notation.
.example
(DEFUN MEMBERSHIP (A B)
       (COND ((ATOM A) (COND ((MEMBER A B) 1) (T 0)))
             (T (MAPCAR (FUNCTION (LAMBDA (I) (MEMBERSHIP I B))) A))))

(DEFUN RAD (R)
       (MAPLIST (FUNCTION (LAMBDA (X) (APPLY (FUNCTION TIMES) X)))
                (APPEND (CDR R) (LIST 1))))≡
.end
.next page
.once center
THE LISP PROCESSOR

	So far we have been able to describe LISP in terms of data and
functions.  This has made it possible for us to treat LISP more as a discipline
of mathematics than of programming.  This is sometimes referred to as the
"object-oriented" point of view, from which both LISP and APL are attractive
languages particularly when compared with the alternatives.  What distinguishes
programming from mathematics are the notions of ⊗⊗process⊗, ⊗⊗event⊗, and ⊗⊗change of
state⊗.  For example, we have thus far considered ≡≡THE.FOX≡ just to be an
object, not the process of constructing the pair ≡≡(THE . FOX)≡.  We have
considered ≡≡1+2≡ to be simply %33%*, not the process of adding %31%* to %32%* to get %33%*. 
We now extend LISP to take the notion of process into account.

	Concretely, the notion of process has two aspects in LISP.  The first
is the concept of ⊗⊗assignment⊗, the second ⊗⊗flow of control⊗.  Assignment deals
with several ways in which the ⊗⊗state⊗ of the processor can be changed.  Flow of
control deals with determining which events happen and in what order.

⊗⊗The LISP processor⊗.  It is convenient to think of the evaluation of LISP
programs as taking place on a computer whose machine language is LISP.  We shall
call this computer the LISP processor, or simply the processor.  The processor
has a ⊗⊗state⊗ consisting of an ⊗⊗environment⊗ (bindings of bound symbols), the
⊗⊗properties⊗ of all atoms, and a ⊗⊗memory⊗ in which all list structures and arrays
are represented.  (Like any processor, a working model of such a thing will
also have many other aspects of its state, but these will all be invisible to
the user.  The idealized or abstract processor is just the visible part of the
whole machine.)

	The objective of the LISP processor is to evaluate a given
S-expression.  That is, the processor constitutes a definition of ⊗⊗eval⊗, no
longer as a function but as a process.  The processor will go through a
sequence of states as it works towards the answer.  The value of any given
S-expression may vary from one state to the next.

⊗⊗Assignment⊗.  A number of operations exist to bring about state changes in every
aspect of the state.  Associated with the environment is ≡≡SET≡ (together with
its very convenient variant ≡≡SETQ≡).  Associated with properties is
≡≡PUTPROP≡ (with the convenient variant ≡≡DEFPROP≡ in some implementations).
Associated with the state of ≡≡cons≡ cells are ≡≡RPLACA≡ and ≡≡RPLACD≡. 
Associated with arrays is ≡≡STORE≡.

≡≡SET≡ and ≡≡SETQ≡.  Evaluation of ≡≡(SET a b)≡ results in the binding of %3a%*λ↑
becoming %3b%*λ↑.  Thus evaluating ≡≡(SET 'X 3)≡ will bind the symbol %3X%* to
%33%*.  The value of ≡≡(SET a b)≡ is %3b%*λ↑.  Because ≡≡SET≡ is almost
invariably used as ≡≡(SET 'a b)≡, its equivalent ≡≡(SETQ a b)≡ is also available.

≡≡RPLACA≡ and ≡≡RPLACD≡.  We have already mentioned the predicate ≡≡eq≡, which
tests whether its two arguments do not merely represent the same object but in
fact are the same representation in that they are stored at the same location. 
The processor performs this test by comparing the pointers to the respective
locations for equality, which in most computers can be done with a single
instruction.  When the object pointed to is a cell, containing a ≡≡car≡ and a
≡≡cdr≡, another similarly convenient operation is that of storing a value in
either the ≡≡car≡ or the ≡≡cdr≡ part of the cell.  The effect is that future
accesses of the ≡≡car≡ or ≡≡cdr≡ of that cell will yield the new value stored
there.  The two operations for achieving this are ≡≡RPLACA≡ and ≡≡RPLACD≡
respectively.  The side effect of evaluating ≡≡(RPLACA c v)≡ is to store ≡≡vλ↑≡ in
the ≡≡car≡ half of the cell ≡≡cλ↑≡.  In addition it returns as its value %3c%*λ↑. 
≡≡RPLACD≡ does the same thing with ≡≡cdr≡ instead of ≡≡car≡.
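In a language with mutable records the same operations are easy to sketch.  The following Python class is an illustrative stand-in for a ≡≡cons≡ cell (the names are ours); the ≡≡eq≡ test corresponds to Python's ≡≡is≡:

```python
class Cons:
    """A two-field cell holding a car and a cdr."""
    def __init__(self, car, cdr):
        self.car = car
        self.cdr = cdr

def rplaca(c, v):
    c.car = v      # store v in the car half of cell c ...
    return c       # ... and return the cell itself, as RPLACA does

def rplacd(c, v):
    c.cdr = v      # likewise for the cdr half
    return c
```

Because every pointer to the cell now sees the new contents, a ≡≡rplaca≡ on a shared cell is visible through all of its references at once.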

≡≡PUTPROP≡.  Atoms may have properties.  Evaluating ≡≡(PUTPROP a b c)≡ assigns
atom %3a%*λ↑ its %3c%*λ↑ property, namely %3b%*λ↑.  Thus ≡≡(PUTPROP 'JOHN 'FRED 'FATHER)≡
assigns ≡≡JOHN≡ the ≡≡FATHER≡ property ≡≡FRED≡.  The %3c%* part is called the
⊗⊗indicator⊗.  The value of ≡≡(GET a b)≡ is then the property of %3a%*λ↑ having
indicator %3b%*λ↑.  Thus ≡≡(GET 'JOHN 'FATHER)≡ would yield ≡≡FRED≡ if evaluated
after evaluating the preceding ≡≡PUTPROP≡ construct.
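A property list amounts to a two-level table: an atom and an indicator jointly select a property.  A minimal sketch in present-day Python (illustrative; we let ≡≡None≡ play the part of ≡≡NIL≡ for a missing property):

```python
properties = {}     # keyed on (atom, indicator) pairs

def putprop(atom, value, indicator):
    """(PUTPROP a b c): give atom a the property b under indicator c."""
    properties[(atom, indicator)] = value
    return value    # returning the stored property is our choice here

def get(atom, indicator):
    """(GET a b): the property of a having indicator b."""
    return properties.get((atom, indicator))
```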

	The form ≡≡(PUTPROP 'a 'b 'c)≡ arises sufficiently frequently as to
warrant, in some implementations, its equivalent ≡≡(DEFPROP a b c)≡.

⊗⊗Use of properties⊗.  In some implementations the value, printname (or pname) and
functional meaning of an atom are all stored on its property list under the
appropriate indicators, respectively ≡≡VALUE≡, ≡≡PNAME≡, and one of ≡≡EXPR≡,
≡≡FEXPR≡, ≡≡MACRO≡ or several other possible functional indicators. 
The effect of ≡≡(DEFUN f (a ... z) e)≡ is that of
≡≡(DEFPROP f (LAMBDA (a ... z) e) EXPR)≡.  One of the earliest large LISP
programs, B. Raphael's SIR (Semantic Information Retrieval) program, maintained
most of its knowledge of the world as properties of atoms.

	A particularly important use is in the permanent definition of
functions.  If for example ≡≡(DEFPROP ADD2 (LAMBDA (X) (PLUS X 2)) EXPR)≡ is
evaluated, the effect is to confer on ≡≡ADD2≡ the functional property of being
the function that adds 2, so that the value of ≡≡(ADD2 5)≡ would then be %37%*.
The usual style for developing large LISP programs is to write a large number
of such function definitions, stored in one or more files.  Such a program may
be loaded into a LISP processor simply by having the ≡≡top-level≡ program (see
the section on input-output) read the relevant files, using the
≡≡(PRINT (EVAL (READ)))≡ loop to evaluate each definition in the file.  At an
appropriate stage in the development of the program, the definitions are
compiled.

≡≡ARRAY≡.  Arrays are provided for in LISP, but are not in general treated as
first-class citizens (see the section on abstract objects under Mathematics of
LISP).  A %33%*x%35%*x%37%* array called ≡≡A≡ may be declared at run time with
≡≡(ARRAY A T 3 5 7)≡.  This constitutes both a request for storage and the
association of the symbol %3A%* with the array.  The first argument must be a
symbol and is the array name.  The second argument may be either %3T%* if
arbitrary S-expressions are to be stored in the array, or ≡≡FIXNUM≡ or ≡≡FLONUM≡
if the array elements are to be all fixnums (integers) or all flonums (reals)
respectively.  (This is one place where types do intrude into the language,
in contrast to the rule that simple variables may be bound to any type.)  The
remaining arguments give the size of each dimension of the array.  Every
dimension starts from %30%*.  Thus ≡≡(A 0 0 0)≡ and ≡≡(A 2 4 6)≡ represent extremal
elements of the array declared above.

≡≡STORE≡.  Array assignments are performed using ≡≡STORE≡.  To change the value of,
say, ≡≡cat[3]≡ to %35%*, evaluate ≡≡(STORE (CAT 3) 5)≡.  In general ≡≡(STORE (a b ... z) v)≡
will change the value of ≡≡a[bλ↑,...,zλ↑]≡ to %3v%*λ↑.  To refer to the contents of an
array element, e.g. ≡≡a[2,0,5]≡, use ≡≡(A 2 0 5)≡.

≡≡PROG≡.  A series of operations, each to be performed for its effect on the
state, can be formed into a LISP program using ≡≡PROG≡.  The general form of
this construct is ≡≡(PROG a b ... z)≡.  The argument %3a%* is a list of symbols
(the ⊗⊗local⊗ variables) and the remaining arguments are either non-atomic LISP
programs which are to be evaluated in the order they appear, or symbols which
are not evaluated but serve as labels or ⊗⊗tags⊗ of points between one argument of
the ≡≡PROG≡ and the next.  For example, ≡≡(PROG (X Y) (SETQ X 1) (SETQ Y 3))≡
is a program that declares %3X%* and %3Y%* to be local variables and sets them to
%31%* and %33%* respectively.  (It does nothing further of interest.)

≡≡GO≡.  If the form ≡≡(GO L)≡ is evaluated in the course of evaluating the
arguments of a ≡≡PROG≡ construct, the evaluation of the remainder of the present
argument is abandoned entirely and evaluation is resumed immediately following
the occurrence of %3L%* as an argument to ≡≡PROG≡.  For example, 
≡≡(PROG (X) (SETQ X 0) L (STORE (A X) 0) (SETQ X (ADD1 X)) (GO L))≡
will set all elements of the array %3A%* to %30%* before illegally attempting to step
beyond the array bounds.
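For comparison, the same loop can be rendered in present-day Python with an explicit bound rather than running past the array (an illustrative translation; the array is an ordinary Python list here):

```python
def zero_array(a):
    """(PROG (X) (SETQ X 0) L (STORE (A X) 0) (SETQ X (ADD1 X)) (GO L)),
    except that the loop stops at the array bound rather than stepping
    illegally beyond it as the LISP original does."""
    x = 0                  # (SETQ X 0)
    while x < len(a):      # the label L; (GO L) returns control here
        a[x] = 0           # (STORE (A X) 0)
        x = x + 1          # (SETQ X (ADD1 X))
    return a
```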


	Implementations vary on whether it is permissible to ≡≡GO≡ to a label
in another ≡≡PROG≡.  For example ≡≡(PROG () L (PROG () (GO L)))≡ would result
in an infinite loop in INTERLISP and an error report in LISP 1.5 or MACLISP.

≡≡RETURN≡.  Normally the value of ≡≡(PROG a ... z)≡ is ≡≡NIL≡.  However, if
≡≡(RETURN x)≡ is evaluated, all further processing of ≡≡(PROG a ... z)≡ is
abandoned and its value is then %3x%*λ↑ rather than ≡≡NIL≡.
.next page
.once center
INPUT-OUTPUT

	LISP communicates with the outside world primarily via the operations 
≡≡print≡ and ≡≡read≡.  Evaluating ≡≡(PRINT a)≡ will result in a suitable
representation of ≡≡aλ↑≡ being printed, starting with a new line, on whatever
output device has been selected by the user.  Normally this is the user's
console.  Other devices may be selected; the means for doing so vary widely
between implementations.

	The variant ≡≡prin1≡ (≡≡prin2≡ in INTERLISP) differs from ≡≡print≡ only in
that it does not start with a new line but rather where the last printout
stopped.  

	Evaluation of ≡≡(TERPRI)≡ results in the output device terminating the
current print line and going to a fresh line, by typing carriage-return
line-feed.

	The operation ≡≡read≡ is used for reading in S-expressions from the
user's console, or from a nominated device.  Thus if ≡≡(READ)≡ is evaluated and
the user then types in "≡≡(PLUS X Y)≡", the value returned by ≡≡(READ)≡  will
be the list ≡≡(PLUS X Y)≡.

	An important property of ≡≡read≡ is that if the output resulting from
evaluating ≡≡(PRINT a)≡ is read back in using ≡≡(READ)≡, the value of ≡≡(READ)≡
will be ≡≡aλ↑≡.

	The program that grants a user at a console access to LISP is called
LISP's ≡≡top-level≡.  In principle ≡≡top-level≡ is:
.example
	(PROG      ()
        L       (PRINT (EVAL (READ)))
                (TERPRI)     ;≡to give the user a fresh line to type on≡≡
                (GO L)).
.end
	In practice there is also  provision for handling errors resulting from
poorly formed input to ≡≡read≡, such as the illegal "≡≡(A . B . C)≡", or
errors arising during evaluation of the input such as unbound variables being
evaluated, non-numeric arguments to numeric functions, etc.

	When the input and output devices are both the user's console,
LISP may be used interactively.  The user may type "≡≡(PLUS 1 2)≡",
and the value of ≡≡(READ)≡ will then be ≡≡(PLUS 1 2)≡.  After
evaluation it becomes ≡≡3≡, which is then printed on the console.
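The behavior of the top-level loop can be imitated with any host evaluator.  Here is an illustrative Python miniature in which Python's own ≡≡eval≡ stands in for LISP's, and a list of input lines stands in for the console:

```python
def top_level(lines):
    """Read-eval-print over a fixed input script: each line is read,
    evaluated, and its printed representation collected in order."""
    printed = []
    for line in lines:              # (GO L): loop over the input
        value = eval(line)          # (EVAL (READ))
        printed.append(str(value))  # (PRINT ...) followed by (TERPRI)
    return printed
```

Feeding it "1 + 2" plays out the same dialogue as typing "≡≡(PLUS 1 2)≡" above.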

	When the input is taken from some other device such as disk,
magnetic tape, or paper tape, the same top-level routine may still be
used.  A file containing function definitions may be read in by
≡≡top-level≡, supplying a mechanism for loading a user's program
prior to running it.

.next page
.once center
MISCELLANEA

⊗⊗The EXPR-FEXPR distinction⊗.  So far we have assumed that the evaluation of a
form ≡≡(F X1 ... Xn)≡ always begins with the evaluation of ≡≡X1≡ through ≡≡Xn≡,
with the exceptions of ≡≡COND≡ and ≡≡QUOTE≡.  For some functions however, such
evaluation often does nothing more than strip off a ≡≡QUOTE≡ from some quoted
S-expression.  The pseudo-function ≡≡setq≡ provides a good example of this.

	The mechanisms for dealing with such functions vary between
implementations.  In LISP 1.5 it depends on how the ⊗⊗car⊗ of a form (call the
form F) is interpreted as a function.  If the ⊗⊗car⊗ of F is an atom, then the
property list of that atom is searched for a functional property.  For
conventional functions the indicator is EXPR.  For functions whose arguments
are not to be evaluated, it is FEXPR.  The FEXPR property must be a function of
one or two arguments.  If it is a function of one argument, then that function
is applied to the ⊗⊗cdr⊗ of F.  If it is a function of two arguments it is applied
to the ⊗⊗cdr⊗ of F and the current environment.

	A similar concept to ≡≡fexpr≡ is ≡≡macro≡, the two differences being that
the macro's argument is bound not to the list of unevaluated arguments but to
the whole form, and the result of evaluating the macro body is then itself
evaluated.  For example ≡≡second≡ may be defined to be the macro with body
≡≡(LAMBDA (X) (LIST 'CAR (CONS 'CDR (CDR X))))≡.  When ≡≡(SECOND a)≡
is evaluated, the macro gives ≡≡(CAR (CDR a))≡ which is then given to ≡≡eval≡
again. While the effect of this can be achieved using a FEXPR, the real
difference shows up in compilation.  The compiler expands the macro before
compiling it.  Thus as far as the quality of compiled code is concerned,
writing ≡≡(SECOND x)≡ is indistinguishable from writing ≡≡(CAR (CDR x))≡. 
Whatever special treatment the compiler affords the latter in the way of
optimization is also afforded the former.  For example, ≡≡CAR≡ and ≡≡CDR≡ are
invariably compiled "in-line" rather than via a subroutine call, saving
considerably on the overhead associated with subroutine calls.  Thus macros
provide a way of defining useful "functions" in LISP while preserving the
efficiency attainable by writing out what the macro generates explicitly.
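The expansion step itself is ordinary list manipulation.  An illustrative sketch in present-day Python, representing forms as nested lists of strings (the function name is ours):

```python
def expand_second(form):
    """Expand a (SECOND a) form into (CAR (CDR a)).

    form is the whole call, e.g. ['SECOND', 'a'], just as the macro's
    parameter X is bound to the whole form; form[1:] plays (CDR X).
    """
    arg_forms = form[1:]                 # the argument forms, unevaluated
    return ['CAR', ['CDR'] + arg_forms]  # the new form, handed back to eval
```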

⊗⊗LISP data-types today⊗.  Saying that LISP is a language whose data-types are
primarily lists considerably understates the situation.  We have already seen
quite a remarkable collection of data-types in our discussion of LISP's facets.
For completeness we collect them together here.  Few languages offer as
complete a collection, particularly if one takes into account which of them are
first-class citizens according to the definition in the section on Mathematics
behind LISP.  Only property lists and arrays are not true first-class citizens
on all implementations of LISP, though they are on MACLISP.
.begin nofill;group
	numbers		integers (unlimited size), reals
	booleans		T and NIL (for false)
	symbols		serving double duty as strings and variables
	lists			for which LISP is best known
	property lists	various applications, e.g. PASCAL's records
	arrays		unrestricted as to type or dimension
	function(al)s		using LAMBDA and APPLY
	programs		using EVAL and QUOTE
.end
 On LISP 1.6/UCI-LISP and MACLISP, bit-wise parallel and shift operations are
provided, so that we may include as well
.begin nofill;
	bit vectors		various applications, e.g. PASCAL's sets
.end
⊗⊗LISP's syntax⊗.  Skin deep, LISP differs from all other languages in its use of
what has been called "Cambridge Polish" notation for programs, "Cambridge"
referring to the location of MIT.  The notation remains controversial; its
users for the most part swear by it, while its non-users cite it as a major
objection to the language.  Common virtues cited in its defense are that the
structure of the program is plainly visible so that when one wants to
manipulate programs as lists it is easy to see what list operations are called
for; that it is easy to parse mechanically; that structural ambiguity in the
language definition can never arise; and that users do not need to memorize a
complex set of grammatical rules.  Common objections, all human engineering
issues, are that the syntax is unfamiliar; too many parentheses are required;
the rule about placement of the function name is overly rigid; that the
notation by and large is unnecessarily long-winded; and that in situations
where pretty-printing a given LISP program is not possible, due to software or
hardware limitations, determining the structure of an expression may not be
easy without resorting to pencil and paper.

	None of these pros and cons can be dismissed out of hand.  It may take
a fairly detailed knowledge of the area of application and the users before one
can assign appropriate weights to these reasons, to decide the overall impact
of the notation.  For example, one application of LISP is as an intermediate
language in an interface between say a natural language program and a
data-base, an eminently suitable application for LISP.  Since the intermediate
language is "not touched by human hands" none of the human-engineering based
objections carry the slightest weight.  In its widest application, to problems
in Artificial Intelligence, many of its users become very satisfied with the
notation after a period of settling in.  On the other hand, in a high school
programming class, where the time for the student to become facile with the
syntax of the language is critically short, the additional time it takes to
become accustomed to thinking in Polish notation, as opposed to relying on well
developed responses to familiar expressions such as "x+y<z," may not be
affordable.

	This is of course a sword-of-Damocles situation.  It is not necessary
to agonize over these questions.  A quite straightforward-to-implement option
is to have a "front-end," an interface between the notation of heart's desire
and the LISP language itself.  Then one chooses the notation to suit the
application.  The notations CLISP, MLISP, and CGOL are offered on the
respective implementations INTERLISP, LISP 1.6/UCI-LISP, and MACLISP, and all
supply to one degree or another an ALGOL-like syntax.
.next page
.once center
THE MATHEMATICS BEHIND LISP

	LISP borrows more heavily than any previous language from the
elegant yet powerful concepts that had evolved in the world of mathematical
logic over the preceding hundred years.  These concepts included:

	Abstract objects
	The pairing function as a universal data constructor
	Functions defined by lambda-abstraction
	Recursively defined functions

⊗⊗Abstract objects⊗.  The concept of an abstract object is at least as old as the
concept of a number divorced from its representation.  Essentially all
high level programming languages embody the notion of abstract
numbers, whether or not they attempt to hide the representation by
forbidding representation-dependent operations (e.g. shifts and
Boolean operations on a binary machine).  They do this by allowing the
programmer to forget about the representation of the number if he
chooses and to imagine that he is manipulating only the abstractions. 
The test for whether complete liberation from the representation has
been attained is whether the object in question can be passed around
inside the computer in all possible ways.  An abstract object can:

 1.  be assigned to some variable;
 2.  be passed as an actual parameter of some procedure or function;
 3.  be returned as the value of some function.

	For example, in ALGOL, integers, reals and Booleans satisfy
all three tests and so are truly abstract objects.  (To judge from the
programming styles apparent in the early years of CACM's algorithms
department, the idea of Booleans as abstract objects was clearly a
difficult thing for people to come to grips with, and many programmers
continued to use two numbers, %30%* and %31%* usually, even when it was clear
that a Boolean was the appropriate data type.)  On the other hand, ALGOL
arrays fail tests 1 and 3, as do functions.  This restriction on arrays and
functions forced a clumsy programming style in ALGOL, as in most high level
programming languages of ALGOL's day.  For example, the programmer could not
define a function that took as arguments two matrices and returned as value
their product.  Instead he had to write a procedure that took as arguments
three matrices, and arrange for the procedure to store the product of the
first two arguments into the third.  To accomplish this required a further
complication involving the distinction between parameters called-by-value
and parameters called-by-name, so that the programmer had to have the third
argument called-by-name.  (To add insult to injury, in most implementations
he also had to call the first two arguments by name to avoid pointless
inefficiency in making copies of the arrays, a problem caused by the
difficulty of the compiler's telling whether a copy is necessary.)

	These tests were not spelled out until as late as 1968 in
connection with the programming language POP-2 developed by R.
Popplestone at the University of Edinburgh.  Popplestone applied these
three criteria together with a fourth (that objects always be testable for
equality) to identify what he called "first-class citizens" for which the
four tests constituted their "bill of rights."  (For the fourth test
Popplestone got around the problem of testing equality of complicated
objects like functions and circular lists by changing the usual notion of
equality to that of LISP's ≡≡eq≡ predicate.  This is inconsistent with the
standard notion of equality of two abstract objects, for which no effective
implementation is possible in general for functions, and we have therefore
omitted Popplestone's fourth test.)  It is clear in hindsight that even
though these conditions had not been made explicit at the time LISP was
conceived, LISP nevertheless met them for its S-expressions, thus becoming the
first programming language to treat complex data structures as first-class citizens.

	The only other language in wide use to embody this principle
extensively is APL.  In APL arrays are treated as objects, to be operated on
like any other object.  They may be passed as arguments to functions, and
returned as the value of a function.   They may also be assigned to variables. 
Thus APL arrays can be seen to pass the first-class-citizen test.

⊗⊗Pairing⊗.  The power of the pairing function (LISP's ≡≡cons≡) was recognized in
mathematics half a century before LISP was developed.  In the introduction we
simply postulated ≡≡car≡ and ≡≡cdr≡ without further ado; however, if we wished to be
truly mathematically pedantic, we might more properly begin by observing the
basic property of ≡≡cons≡, that if ≡≡a.b = c.d≡ then ≡≡a=c≡ and ≡≡b=d≡.  That is, ≡≡a.b≡ is an
object from which in principle %3a%* and %3b%* can be reconstructed.  Contrast this
with %3+%*, where ≡≡a+b = c+d≡ does not necessarily imply either ≡≡a=c≡ or ≡≡b=d≡, whence
given the number ≡≡4+5≡ (i.e. %39%*) it is not possible to say for certain whether
this was arrived at as the sum of %34%* and %35%*, %33%* and %36%* or any other combination. 
The importance of this "information-preserving" property of ≡≡cons≡ is that it
leads to abstract objects that can function as memories holding two independent
objects.  The apparent limitation to two objects disappears when it is realized
that if ≡≡a.(b.c) = d.(e.f)≡ then ≡≡a=d≡ and ≡≡b.c=e.f≡, whence ≡≡b=e≡ and ≡≡c=f≡.  So by
using ≡≡cons≡ twice we can construct a three-memory object.  Continuing in this
way we can in principle build indefinitely large memories as abstract objects. 
This of course is all obvious without this pedantry, but it is of interest to
note that we were able to say this much about ≡≡cons≡ without having to mention
the "memory accessing" functions ≡≡car≡ and ≡≡cdr≡.
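The information-preserving property is easy to exhibit with any pairing constructor.  In Python, tuples will do (an illustrative sketch, not the article's ≡≡cons≡):

```python
def cons(a, b):
    """An information-preserving pair: a.b = c.d only if a = c and b = d."""
    return (a, b)

def car(p):
    return p[0]

def cdr(p):
    return p[1]

def memory3(a, b, c):
    """a.(b.c): a three-slot memory built from two pairs."""
    return cons(a, cons(b, c))
```

Contrast ≡≡+≡: the sums %34%*+%35%* and %33%*+%36%* are the same number, but the corresponding pairs are distinct objects from which the components can be recovered.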

⊗⊗λ-abstraction⊗.  In 1936 A. Church introduced the notion of \-defined
functions.  Twenty-three years later McCarthy adapted Church's ≡≡\≡-notation to LISP,
generalizing it to several variables.  This gave the programmer the power to
create, as abstract objects, new functions from old, just as in Church's
system.

⊗⊗Recursively defined functions⊗.  Programming via recursively defined functions
had been developed to a fine art in the world of mathematical logic by the time
it made its debut into the world of actual computers as a feature of LISP. 
"Recursive functions" eventually came in logic to refer to any functions that
could be computed, but the name derived from the favorite way of programming
such functions, namely via recursive definitions.

			 PROGRAM VERIFICATION IN LISP

	Because of pure LISP's use of ideas from mathematical logic, it is very
easy to prove a variety of properties of pure LISP programs.  Let us illustrate
this by proving that if ≡≡append≡ is defined as 

  "≡≡a@b = [null[a] ⊃ b, T ⊃ car[a].[cdr[a]@b]]≡,"

then ≡≡a@[b@c] = [a@b]@c≡ for all lists %3a%* and all objects %3b%* and %3c%* (whether
S-expressions or not).  That is, ≡≡append≡ defined in this way is an ⊗⊗associative⊗
operation.

	In our proof it will be convenient to decompose conditionals into their
respective cases.  In the definition of ≡≡append≡, either the list %3a%* is
≡≡NIL≡, in which case the definition simplifies to≡≡

           NIL@b = b                                      ≡(1)

or the list %3a%* is non-≡≡empty≡ in which case we may write %3a%* as ≡≡x.y≡
and the definition simplifies to≡≡

         [x.y]@b = x.[y@b]                                ≡(2).

        We shall prove this associativity result first of all for the special
case when %3a%* is the empty list ≡≡NIL.

         a@[b@c] = NIL@[b@c]                    (a=NIL)
                 = b@c                          (by (1))
                 = [NIL@b]@c                    (by (1))
                 = [a@b]@c                      (a=NIL).  ≡(3)

        Now let us prove it for any list %3a%* of length 1, which we can always
write as ≡≡x.NIL.

         a@[b@c] = [x.NIL]@[b@c]
                 = x.[NIL@[b@c]]                (by (2))
                 = x.[[NIL@b]@c]                (by (3))
                 = [x.[NIL@b]]@c                (by (2))
                 = [[x.NIL]@b]@c                (by (2))
                 = [a@b]@c                                ≡(4)

	Notice how we made use of the result (%33%*) for the case ≡≡a=NIL≡.  This
technique allows us to move on to lists of length %32%*, which we can express as
≡≡x.y≡ where %3y%* is a list of length %31%*.  If we carry out the above proof with
≡≡NIL≡ replaced by %3y%* and the third line attributed to (%34%*) instead of (%33%*),
we will then have proved the result for all lists of length %32%*.

	It should be clear now that, having proved the result for lists of
length %3n%*, where %3n%* is any non-negative integer, we can immediately use
the fact in a proof for lists of length ≡≡n+1≡, just by expressing the list
%3a%* of length ≡≡n+1≡ as ≡≡x.y≡ where %3y%* is of length %3n%* and using the
same proof over again.  In this way we can go on to prove the result for lists
of any length we care to name.  In fact, if we take the infinite set of all
such results, then we are entitled to infer from that set that the result holds
for all lists.  This step in the reasoning, from the infinitely many results,
(one for each length of list) to the single result for all lists, is called the
⊗⊗omega rule⊗.  In general, if you have proved the infinitely many facts P≡≡[0]≡,
P≡≡[1]≡, P≡≡[2]≡,≡≡...≡ (the antecedents) then you may infer the fact P≡≡[n]≡
(the consequent) where %3n%* is not constrained to be any particular number. 
In our case P≡≡[n]≡ was "for all lists %3a%* of length %3n%*, ≡≡a@[b@c] = [a@b]@c≡." 

	In practice of course one never gets around to applying the omega rule
because one never finishes generating all of its antecedents.  Such ⊗⊗infinitary⊗
rules are of little real use.  What is needed is a means of looking into the
future, so to speak, and predicting that with infinitely much time one could
prove all of the antecedents.  One of the simplest reasons for being sure of
such a thing is having a proof of P[%30%*], and also a proof of P≡≡[n+1]≡
that appeals to P[%3n%*].  A proof step based on this reason is said to use the
⊗⊗induction rule⊗.

	One can imagine a little engine that, once running and left alone, will
continue to run forever.  This corresponds to the part of the proof step that
establishes P[≡≡n+1≡] provided P[%3n%*] holds.  This part is called the ⊗⊗inductive step⊗,
and P[%3n%*] the ⊗⊗inductive hypothesis⊗.  In order to get this engine to run
forever, it first has to be started.  This corresponds to the part that
established P[%30%*], called the ⊗⊗basis⊗ for the induction.

	The above proof depended on the structure of the data, in particular
the length of a list.  Such an inductive proof is said to use ⊗⊗structural induction⊗.
This is in contrast to a proof that appeals to the number of steps
of a computation, which is said to use ⊗⊗computational induction⊗.
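While the proof needs no computer, the identity can at least be spot-checked.  Here is a Python transcription of the definition (restricting %3b%* and %3c%* to lists, which the LISP version does not require), together with a checker over a set of sample lists:

```python
def append(a, b):
    """a@b = [null[a] -> b, T -> car[a].(cdr[a]@b)]."""
    if not a:                          # null[a]
        return b
    return [a[0]] + append(a[1:], b)   # car[a] consed onto cdr[a]@b

def associative_on(cases):
    """Check a@(b@c) = (a@b)@c over all triples drawn from cases."""
    return all(append(a, append(b, c)) == append(append(a, b), c)
               for a in cases for b in cases for c in cases)
```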

				IMPLEMENTATION

	The degree of difficulty of implementing a programming language
depends largely on the extent of the difference between the implemented
language and the implementation language.  When the latter is a typical machine
language, the issues are for the most part the same issues encountered in
implementing any high level language.  Thus they do not need special attention
here.  What does require some care is the implementation of those language
features peculiar to LISP.

	The discussion of this implementation will not depend on the details of
the implementation language.  We assume that the reader interested in the
implementation issue has some familiarity with the organization of conventional
computers.  Our computer is a machine that has several general-purpose
registers capable of holding pointers to individual words of main memory, and
those words themselves are capable of holding pointers (whether one or two
is immaterial; if only one, then two words per ≡≡cons≡ cell will be needed.)

	We shall treat only that part of LISP that is side-effect free, the
part we have been calling "denotational" as opposed to process oriented.  This
permits simplifications in the overall design that afford a better appreciation
of what is involved in implementing at least this much of LISP.  We shall also
ignore numbers.  We assume that the behavior of incorrect programs is left
unspecified, so that no error checking is required.  The proper treatment of
these issues is not sufficiently LISP-dependent to warrant special treatment
here.  

	We depart from LISP in one important respect.  We shall treat the
function position of a form as requiring evaluation to get the function it
denotes.  This simplifies the implementation of ≡≡eval≡ slightly, at the expense
of not permitting a name such as ≡≡PLUS≡ to serve simultaneously as a
variable and as the name of a function.

	The objects we need to implement are: S-expressions, the primitive
functions ≡≡cons≡, ≡≡car≡, ≡≡cdr≡, and ≡≡atom≡, environments, the function ≡≡eval≡
(in which is embodied the LISP ⊗⊗interpreter⊗), and functions ≡≡read≡ and ≡≡print≡. 
Let us deal with these in turn.

	We shall represent all S-expressions as pointers to storage locations. 
Each symbol will be represented as a pointer to a property list whose ≡≡car≡ is
itself a special value reserved for this purpose alone.  The value will never
appear in a list except as the ≡≡car≡ of a property list.  This gives us one way
to implement the predicate ≡≡atom≡, with the advantage that we do not need a
separate area of memory from that reserved for ≡≡cons≡ cells, simplifying our
storage management algorithm.  ⊗⊗Cons⊗ cells will occupy all available memory.  We
assume that each ≡≡cons≡ cell occupies sufficient memory to hold two pointers. 
On the Digital Equipment Corporation PDP10, a machine whose architecture is
well suited to the efficient implementation of LISP, the two halves of a 36-bit
word are used for one ≡≡cons≡ cell.  

	To begin with we shall simplify the implementation of ≡≡cons≡ by
assuming that storage once claimed by ≡≡cons≡ is never claimed again.  The class
of problems that can be handled without reclaiming abandoned ≡≡cons≡ cells is
smaller than if reclaiming, or ⊗⊗garbage collection⊗, is performed when storage is
exhausted.  Nevertheless a usable system for small problems (e.g. class
exercises) can be built in this way.

	A system variable (one not available to the LISP user) called ≡≡AVAIL≡
is initially set to point to the beginning of the memory holding the ≡≡cons≡
cells.  The function ≡≡cons≡ stores its arguments in the two consecutive parts of
memory pointed to by ≡≡AVAIL≡, increments ≡≡AVAIL≡ to point to a fresh ≡≡cons≡
cell in readiness for the next call to ≡≡cons≡, and returns as value the
pre-incrementation value of ≡≡AVAIL≡.

	The functions ≡≡car≡ and ≡≡cdr≡ simply access the respective halves
of the location pointed to by their argument.  The PDP10 has halfword data
moving instructions, and compiled code for each of these two functions requires
only a single instruction.  As a result some array problems can be solved as
efficiently with lists as with arrays, contrary to the popular image of list
processing as inherently slower than array processing on fixed-size arrays.

	The predicate ≡≡atom≡ checks that the car of the location pointed to
by its argument contains the special atom.  
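The allocation scheme and the accessors just described can be sketched in
modern notation (Python); the flat ≡≡MEMORY≡ array, the ≡≡ATOM_MARK≡ value, and
≡≡make_atom≡ are illustrative names, not from any actual implementation.

```python
# A sketch of the storage scheme: a flat MEMORY of cells, each holding
# two pointers, an AVAIL pointer to the next free cell, and a reserved
# value ATOM_MARK found only in the car of a property list.

MEMORY = [None] * 1000    # each slot holds a (car, cdr) pair of pointers
AVAIL = 0                 # next free cell; storage is never reclaimed here
ATOM_MARK = object()      # the special value reserved for atoms

def cons(x, y):
    """Store x and y in the cell at AVAIL and return its address."""
    global AVAIL
    MEMORY[AVAIL] = (x, y)
    AVAIL += 1
    return AVAIL - 1      # the pre-incrementation value of AVAIL

def car(p): return MEMORY[p][0]
def cdr(p): return MEMORY[p][1]

def atom(p):
    """A pointer denotes an atom when its car is the reserved value."""
    return MEMORY[p][0] is ATOM_MARK

def make_atom():
    """A fresh atom: a property list whose car is the reserved value."""
    return cons(ATOM_MARK, None)
```

Because atoms are recognized purely by the reserved value in their car, atoms
and ≡≡cons≡ cells share one storage area, as the text requires.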

	We now need a representation for environments, which define bindings
for atoms.  This is easily accomplished by representing the binding of variable
%3x%* to value %3v%* as the pair ≡≡x.v≡, and an environment as a list of such
pairs.  Such a representation of an environment is called an association list
or ⊗⊗a-list⊗.
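An a-list can be sketched in modern notation (Python), with the dotted pair
≡≡x.v≡ rendered as a 2-tuple and the environment as a Python list of such
pairs, most recent binding first; the function names are illustrative.

```python
# A minimal sketch of a-list environments.

def bind(var, val, env):
    """Extend env with the binding var = val (non-destructively)."""
    return [(var, val)] + env

def lookup(var, env):
    """Variable lookup as a loop: find the nearest binding of var."""
    for w, x in env:
        if w == var:
            return x
    raise NameError(var)   # the null environment: var is unbound
```

Note that an inner binding of a variable shadows an outer one simply by
standing earlier in the list.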

	Now things become more difficult.  The function ≡≡eval≡ requires a
fair amount of programming.  The function implemented must have the following
properties.≡≡

1.  eval[v,[v.x].e] = x          ≡where %3v%* is a variable, %3e%* an environment≡≡
2.  eval[v,[w.x].e] = eval[v,e]  ≡where %3v%* and %3w%* are distinct variables≡≡
3.  eval[(COND (a↓1 b↓1) ... (a↓n b↓n)), e]
                    = eval[b↓l,e] ≡where %3l%* is the least %3i%* such that≡≡
                                 eval[a↓i,e]≡ is non-null≡≡
4.  eval['a,e]      = a
5.  eval[(FUNCTION f),e]
                    = (FUNARG f e)
6.  eval[(f a↓1 ... a↓n),e]        ≡where %3f%* represents primitive %3F%*≡≡
                    = F[eval[a↓1,e],...,eval[a↓n,e]]
7.  eval[(f a↓1 ... a↓n),e]        ≡where %3f%* is a non-primitive symbol≡≡
                    = apply[eval[f,e],(eval[a↓1,e] ... eval[a↓n,e]),e]
8.  eval[(l a↓1 ... a↓n),e]        ≡where %3l%* is a list≡≡
                    = apply[l,(eval[a↓1,e] ... eval[a↓n,e]),e]≡

where the auxiliary function ≡≡apply≡ is≡≡

9.  apply[(LAMBDA v b),a,e]
                    = eval[b,bind[v,a,e]]
10. apply[(LABEL f l),a,e]
                    = apply[l,a,[f.(LABEL f l)].e]
11. apply[(FUNARG f d),a,e]
                    = apply[f,a,d]≡.

where the auxiliary function ≡≡bind≡ is≡≡

12. bind[NIL,NIL,e] = e
13. bind[v.w,a.b,e] = (v.a).bind[w,b,e]≡

	Equations 1 and 2 provide for looking up the value of a variable %3v%* in
the environment %3e%*.  They need not be implemented recursively - in fact
they should be implemented as a loop which searches for %3v%*.  No end test is
needed for this loop unless errors are to be dealt with, since reaching the
null environment indicates an unbound variable, an error condition.  Of course
a complete implementation of LISP will include extensive error checking.

	Equation 3 involves a loop in which each clause's first element is
evaluated by a recursive call to ≡≡eval≡.  When a non-null value is
encountered, the second element of the clause should then be evaluated.  Since
no further processing is called for after that, this final call to ≡≡eval≡
should not take the form of a subroutine call but merely a jump.  This avoids
wasting return-address space on the stack.

	Equation 4 is trivial.

	Equation 5 needs some explanation.  The idea is that if we are asked
for the value of ≡≡\x.y≡ in an environment where %3y%* is %31%*, the answer ought
to be ≡≡\x.1≡.  This answer is a function, not a number.  In LISP, asking for the
value of ≡≡\x.y≡ means evaluating ≡≡(FUNCTION (LAMBDA (X) Y))≡.  This could
be taken to mean that the result should be ≡≡(LAMBDA (X) 1)≡ and indeed this
object would give the correct result.  However, it would also be expensive to
produce, especially if the lambda-expression were large.  Worse, it
would mean that the lambda-expression could not be compiled.  These are the
reasons motivating the use of environments in the first place.  Hence instead
what is produced is ≡≡(FUNARG (LAMBDA (X) Y) e)≡ where "≡≡FUNARG≡" indicates that
this is a function with an associated environment, namely %3e%*.

	If none of equations 1 through 5 are relevant, the arguments should now
be evaluated in environment %3e%*.  Their values may be put wherever convenient.
They could be formed into a list, as implied by the definition of ≡≡apply≡. 
However the usual strategy is to put them in fast registers and (if all
registers are used up) to put the overflow in a list or in words of main memory
set aside for such a situation.

	Equation 6 requires testing whether the ≡≡car≡ of the form is one of
the symbols ≡≡CAR≡, ≡≡CDR≡, ≡≡CONS≡, or ≡≡ATOM≡.  If so then the appropriate
function should be called.  Except for ≡≡cons≡ when garbage collection is provided
for, these functions are so simple that they may as well be "open-coded" in the
interpreter, that is, written out in full instead of being called by a
subroutine jump.

	If none of equations 1 through 6 apply and the car of the form is an
atom, it should be evaluated, disposing of the difference between equations 7
and 8.  

	This brings us to equations 9 through 11.  If the first argument to ≡≡apply≡
has as its ≡≡car≡ ≡≡LAMBDA≡ then the arguments should be bound to the variables named
by the ≡≡cadr≡ of the first argument and control can go straight to ≡≡eval≡ (no
subroutine jump required).  If the ≡≡car≡ is ≡≡LABEL≡ then the ≡≡cadr≡ is bound to the
whole first argument (not an expensive operation since only the pointer to
that argument is involved), the first argument is replaced by its ≡≡caddr≡, and
control returns to the point where we tested whether the ≡≡car≡ of the first
argument was ≡≡LAMBDA≡, again not a recursive call.  If the ≡≡car≡ is
≡≡FUNARG≡ then we need simply reset the environment to that called for by the
≡≡caddr≡, reset the first argument to its ≡≡cadr≡, and return to ≡≡apply≡
just as was done for ≡≡LABEL≡.

	Equations 12 and 13 implement binding of a list of arguments to a list of
variables.  Though we have called for recursion here, a non-recursive solution
should be adopted.
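Taken together, equations 1 through 13 can be sketched as a small interpreter
in modern notation (Python), representing forms as nested Python lists, atoms
as strings, and environments as lists of (variable, value) pairs; the names
≡≡l_eval≡, ≡≡l_apply≡, and the ≡≡PRIMITIVES≡ table are illustrative.

```python
# A compact sketch of equations 1-13.  Dotted pairs are approximated by
# Python lists, so CONS onto an atom yields a two-element list.

PRIMITIVES = {
    'CAR':  lambda x: x[0],
    'CDR':  lambda x: x[1:],
    'CONS': lambda a, d: [a] + (d if isinstance(d, list) else [d]),
    'ATOM': lambda x: 'T' if isinstance(x, str) else 'NIL',
}

def l_eval(x, env):
    if isinstance(x, str):                        # eqs. 1-2: lookup
        for v, val in env:
            if v == x:
                return val
        raise NameError(x)                        # unbound variable
    op = x[0]
    if op == 'QUOTE':                             # eq. 4
        return x[1]
    if op == 'COND':                              # eq. 3
        for a, b in x[1:]:
            if l_eval(a, env) != 'NIL':
                return l_eval(b, env)
        raise ValueError('COND fell through')     # behavior unspecified
    if op == 'FUNCTION':                          # eq. 5: close over env
        return ['FUNARG', x[1], env]
    if isinstance(op, str) and op in PRIMITIVES:  # eq. 6
        return PRIMITIVES[op](*[l_eval(a, env) for a in x[1:]])
    f = l_eval(op, env) if isinstance(op, str) else op   # eqs. 7-8
    return l_apply(f, [l_eval(a, env) for a in x[1:]], env)

def l_apply(f, args, env):
    if f[0] == 'LAMBDA':                          # eq. 9 (12-13: bind)
        return l_eval(f[2], list(zip(f[1], args)) + env)
    if f[0] == 'LABEL':                           # eq. 10
        return l_apply(f[2], args, [(f[1], f)] + env)
    if f[0] == 'FUNARG':                          # eq. 11
        return l_apply(f[1], args, f[2])
```

Applied to ≡≡((LAMBDA (Z) Z) (QUOTE A))≡, represented here as
≡≡[['LAMBDA', ['Z'], 'Z'], ['QUOTE', 'A']]≡, the sketch returns ≡≡A≡.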

	This completes the description of the implementation of ≡≡eval≡. 
Before completing the system with the implementations of ≡≡print≡ and
≡≡read≡, let us consider two difficult but important issues related to
binding.  Both of them have caused much confusion during the development of
LISP, and proponents of wholesale verification of all optimization tricks used
in implementing languages efficiently will find grist for their mills here.

⊗⊗Deep and shallow binding⊗.  Equations 1 and 2 implement what is known as "deep"
binding, so called because a given variable may have its binding arbitrarily
deep in the environment, requiring a long time to search the environment for
its value.  An alternative technique, called "shallow" binding, associates the
current value of the variable more directly with that variable by putting it
either on the property list of the variable under the indicator ≡≡VALUE≡, or in
a special "value" cell associated with that atom.  Whenever a variable is
bound, its current value is put on a stack of values (a single stack for all
variables) and the new value is then associated with the atom.  Evaluation of a
variable takes a small fixed amount of time with this scheme.

	Eventually variables need to be restored to their old values,
corresponding in the case of the above implementation of ≡≡eval≡ to returning
from a recursive call to ≡≡eval≡ and restoring the environment to its
pre-recursive-call value.  This entails abandoning some values, namely those
that are displaced by the old values.  This causes no problems unless those
values continue to be needed; the only way this can happen is if in returning
to the old environment a value is returned which needs the abandoned
environment for its correct interpretation.

	Consider for example ≡≡[[\y.[\x.y]][1]][1]≡.  The function ≡≡[\y.[\x.y]][1]≡ is
to be applied to %31%*.  That function should be ≡≡\x.1≡, the constant
function that always returns %31%*.  However, with the existing system of
evaluation, ≡≡eval≡ is dependent on the environment to tell that %3y%* is %31%* when
it is ready to apply ≡≡\x.y≡.  This is the role played by ≡≡FUNARG≡ in the
above implementation; it supplies ≡≡eval≡ with the environment in which %3y%* is
bound to %31%*.  With shallow binding, this environment has vanished by the time
%3y%* is evaluated.  An efficient solution to this problem is difficult to
achieve, though there exist some quite good ones, such as the INTERLISP
implementation's "spaghetti" stack.  An ideal solution remains elusive.

⊗⊗Dynamic versus static scope⊗.  The scope of a variable refers to the rules
governing how it is bound to values.  With static or lexical scope, as used in
Church's ≡≡\≡-calculus, occurrences of variables are associated with the closest
occurrence of a corresponding "≡≡\≡" within whose body they appear.  Once the
corresponding lambda-expression has been applied to a value, all associated
occurrences of variables are bound to that value for good.  On the other hand,
dynamic binding, used in LISP, relies on the notion of the process of
evaluation, and also on the use of a single environment that changes in the
course of evaluation.  The value of a given occurrence of a variable depends on
that variable's binding in the environment at the time the given occurrence was
evaluated, not at the time the binding was performed.  Since it can happen that
a variable may be bound and rebound many times in the course of evaluation, the
value of a given occurrence of that variable may depend on when the variable is
evaluated.

	To see how this can make a difference, consider this expression.

  ≡≡"[\x.[αf.\y.[y=0⊃x,[\x.f[y-1]][0]]][1]][1]"≡.

Its equivalent S-expression is

  ≡≡((LAMBDA (X)
            ((LABEL F
                    (LAMBDA (Y)
                            (COND ((ZEROP Y) X)
                                  (T ((LAMBDA (X) (F (SUB1 Y)))
                                       0)))))
              1))
     1)≡

	If we use static scope then this expression is fairly readily seen to
denote %31%*.  The occurrence of "%3x%*" must be associated with the outer "≡≡\x≡"
rather than the inner, not being in the scope of the inner as the expression
stands.  Hence as soon as we apply ≡≡[\x.[αf.\y.[y=0⊃x,[\x.f[y-1]][0]]][1]]≡ to
%31%*, the occurrence of "%3x%*", being associated with the outer "≡≡\x≡," is bound to
%31%*.  Now provided the recursion terminates by %3y%* going to %30%*, we will get
this binding of %3x%* as the answer.

	On the other hand, when we evaluate this using the above interpreter,
we get a computation that corresponds to the following sequence.  In it we
abbreviate ≡≡[αf.\y.[y=0⊃x,[\x.f[y-1]][0]]]≡ to F.  On the right we give the
current environment, rather than using the copy rule, with most recent bindings
appearing on the left.≡≡

[\x.F[1]][1]
F[1]                                       x=1
[\y.[y=0⊃x,[\x.F[y-1]][0]]][1]             x=1
[y=0⊃x,[\x.F[y-1]][0]]                     y=1,x=1
[1=0⊃x,[\x.F[y-1]][0]]                     y=1,x=1
[\x.F[y-1]][0]                             y=1,x=1
F[y-1]                                     x=0,y=1,x=1
F[0]                                       x=0,y=1,x=1
[\y.[y=0⊃x,[\x.F[y-1]][0]]][0]             x=0,y=1,x=1
[y=0⊃x,[\x.F[y-1]][0]]                     y=0,x=0,y=1,x=1
[0=0⊃x,[\x.F[y-1]][0]]                     y=0,x=0,y=1,x=1
x                                          y=0,x=0,y=1,x=1
0                                          y=0,x=0,y=1,x=1≡

	With static binding we would have got %31%* rather than %30%*.  The difference
can be attributed to the fact that with dynamic binding, a variable may not be
evaluated until long after it has been bound, by which time it may have been
rebound on account of some other usage of "≡≡\x≡" that, from the static point
of view, has nothing to do with the occurrence now being evaluated.

	While we have phrased this argument in terms of ≡≡label≡, it is just
as sound if we rewrite everything using ≡≡DEFUN≡.  If this is done it will be
observed that in the recursive definition of %3f%*, %3x%* occurs free.  When
function definitions have no free occurrences of variables, the difference
between static and dynamic scope vanishes.

	It should be pointed out that the difficulty arising with shallow
binding, though similar in flavor to the static-dynamic problem, is in fact a
different problem arising only when functions are returned as values.  The
dynamic-static difference can arise without having to return a function, or
even pass one as a parameter.
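For comparison, the expression above can be transcribed into Python, a
statically scoped modern language; the transcription is ours.  Its result,
%31%*, agrees with the static reading, not with the %30%* of the dynamic trace
above.

```python
# [\x.[αf.\y.[y=0⊃x,[\x.f[y-1]][0]]][1]][1] transcribed into Python.
# Python is statically (lexically) scoped: the free x in f is tied to
# the outer binding, so rebinding x inside has no effect on f.

def outer(x):                            # the outer [\x. ... ][1]
    def f(y):                            # [αf.\y. ... ]
        if y == 0:
            return x                     # the free occurrence of x
        return (lambda x: f(y - 1))(0)   # [\x.f[y-1]][0]
    return f(1)

result = outer(1)    # 1 under static scope; the dynamic trace gives 0
```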

≡≡PRINT≡ ⊗⊗and⊗ ≡≡READ≡.  The implementation of ≡≡PRINT≡ is an ideal LISP
exercise.  It assumes the existence of a primitive character printer, and also
a way of accessing the ≡≡printname≡ or ≡≡pname≡ of an atom, namely the string
of characters we use to identify it outside the machine.
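The exercise can be sketched in modern notation (Python); here atoms are
strings (standing in for their printnames), dotted pairs are 2-tuples, and
≡≡emit≡ stands in for the primitive character printer.

```python
# A sketch of PRINT: recursively print an S-expression, preferring
# list notation and falling back to dot notation for improper tails.

def print_sexp(x, emit):
    if isinstance(x, str):
        emit(x)                   # an atom: emit its printname
        return
    emit('(')
    while True:
        print_sexp(x[0], emit)    # print the car
        x = x[1]
        if x == 'NIL':            # a proper list ends here
            break
        if isinstance(x, str):    # improper tail: fall back to a dot
            emit(' . ')
            emit(x)
            break
        emit(' ')                 # more elements follow
    emit(')')
```

Thus ≡≡(A . (B . NIL))≡ prints as ≡≡(A B)≡, while ≡≡(A . B)≡ prints with the
dot.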

	≡≡READ≡ requires more work than ≡≡PRINT≡.  The primary reason for
this is the way our implementation represents atoms internally.  We could
represent these atoms as strings, but instead we have chosen to represent them
as pointers to property lists.  The difference is that in the latter case, to
make sure that two distinct references to the same atom also end up referring
to the same property list, we need to make the connection between the two
references at the time they are read in.  This is achieved via the use of a
hash table with "buckets."  Each printname as it is read in is mapped via a
hash function to an integer in a range of %30%* to perhaps ≡≡200≡ or so.  An array
of size ≡≡200≡ is then accessed at the position mapped to, yielding a list of
all atoms that have previously mapped to that position.  This list is searched
for an occurrence of an atom with that printname.  If none is found then a new
property list is created for this atom and put on the list.  If it is found
then it (the pointer found on the list) is taken to be the atom and no further
processing is done.
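The interning scheme can be sketched in modern notation (Python); the bucket
count and hash function here are illustrative, and a property list is modeled
as a Python list headed by the printname.

```python
# A sketch of interning via a fixed array of hash buckets.

NBUCKETS = 200
BUCKETS = [[] for _ in range(NBUCKETS)]

def intern(pname):
    """Return the one atom with this printname, creating it if new."""
    h = sum(ord(c) for c in pname) % NBUCKETS
    for a in BUCKETS[h]:
        if a[0] == pname:      # seen before: reuse the same atom
            return a
    a = [pname]                # a new property list for a new atom
    BUCKETS[h].append(a)
    return a
```

Two occurrences of the same printname thus yield pointers to the same
property list, which is precisely what ≡≡READ≡ must guarantee.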

	The remaining aspects of ≡≡read≡ are a fairly straightforward exercise
in LISP programming and we shall not dwell on them here.

⊗⊗Garbage collection⊗.  Our implementation of ≡≡cons≡ rather simple-mindedly
moves a pointer, ≡≡AVAIL≡, through storage until storage is exhausted.  While
this is suitable for small problems, or problems doing next to no
"≡≡cons≡ing"; it fails as soon as storage is exhausted, as it inevitably will
be for bigger problems.

	The original LISP solution to this problem is now a classic technique
of computer science.  The idea is, when storage is exhausted, to suspend all
LISP processing while stock is taken of which ≡≡cons≡ cells in storage may
still be in use.  Having determined these, the remainder are collected and made
available for allocation by ≡≡cons≡.  Then LISP processing is resumed.

	The "taking stock" part is called the ⊗⊗mark phase⊗ of garbage collection.
Starting from each pointer currently being manipulated or stored by the
interpreter in a register or on the interpreter's own push-down stack, all
≡≡cons≡ cells accessible by following chains of pointers are marked using a
special bit reserved in each cell for this purpose.

	The "collecting" part is called the ⊗⊗sweep phase⊗, and consists of making
one linear scan of memory during which all unmarked cells are chained together
into a list by storing pointers in their ≡≡cdr≡ halves from one free cell to the
next, and all marked cells are unmarked.
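The two phases can be sketched in modern notation (Python); each cell is a
mutable triple [car, cdr, markbit], pointers are cell indices, ≡≡None≡ means
"no pointer," and ≡≡roots≡ models the interpreter's registers and push-down
stack.  All names are illustrative.

```python
# A sketch of mark-and-sweep over an array of cells.

def mark(cells, p):
    """Mark phase: mark every cell reachable from pointer p."""
    while p is not None and not cells[p][2]:
        cells[p][2] = True
        mark(cells, cells[p][0])   # follow car chains recursively
        p = cells[p][1]            # follow cdr chains iteratively

def sweep(cells):
    """Sweep phase: one linear scan chaining free cells through cdrs."""
    free = None
    for i in range(len(cells)):
        if cells[i][2]:
            cells[i][2] = False    # unmark the survivors
        else:
            cells[i][1] = free     # link this free cell to the rest
            free = i
    return free                    # head of the free list for cons

def collect(cells, roots):
    for r in roots:
        mark(cells, r)
    return sweep(cells)
```

After a collection, ≡≡cons≡ allocates from the returned free list instead of
advancing ≡≡AVAIL≡ through virgin storage.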

	This completes our discussion of the implementation of a small LISP
interpreter.

⊗⊗Use of interpretation⊗.  Below we discuss compilation, which permits fast
execution of LISP programs.  In view of this option what is the point of
interpretation?

	The problem with compilation is the premise on which it rests, that
only the external behavior of the program is of interest.  While valid for
production runs of a program it is invalid for development runs because in
order to understand why a program is misbehaving one generally needs to
understand what it is doing.  This can be quite awkward in compiled code, where
the connection between the instructions of the source and object versions may
require considerable effort to establish, even assuming that the programmer is
familiar with both languages.  The extent to which debugging tools can help in
this regard is the extent to which those tools in the hands of the programmer
constitute an incremental decompiler.

	The situation is considerably simpler with an interpreter.  The code
being executed is exactly the code the programmer wrote.  The processing done
by the interpreter is not distorted to gain speed, but matches what the
programmer imagined should happen.  The data can be viewed in the form the
programmer works with, rather than an efficiency-oriented encoding of it.  This
is not to imply that interfaces cannot be built that deal with these issues in
compiled code, but it is much simpler and more cost-effective in general to
achieve all this using an interpreter.

⊗⊗Compilation⊗.  Consider the time taken to execute one application of ≡≡car≡
using the above suggestion for implementing ≡≡eval≡, which we can estimate at
about twenty instructions.  This is unreasonable when one considers that, of
these twenty instructions, only one or two (depending on the machine's
instruction set) actually do useful work.  The remainder are spent determining
that ≡≡car≡ is called for and maneuvering the data into position.

	This cost is typical of emulation, which is one way to think of what
our implementation of ≡≡eval≡ is doing, namely emulating a LISP processor. 
Emulation of an abstract machine such as our hypothetical LISP  processor is
called ⊗⊗interpretation⊗, and the emulator an ⊗⊗interpreter⊗;  the interpreter is
supposed to interpret LISP programs the way a LISP processor would.

	An alternative to emulation is translation, or ⊗⊗compilation⊗ as it is
called when the target or ⊗⊗object⊗ language is the machine language of a
computer.  The ⊗⊗compiler⊗ takes a LISP object and produces a machine language
program which when run exhibits the same external characteristics as the LISP
object, though the internal behavior may be altogether different and of
interest to the user only insofar as it affects efficiency.  Compilation of
LISP objects is in practice confined to compilation of functions, which is
enough to permit the programmer to realize the major efficiencies possible with
compilation, which may be a factor of ten in speed.

	One tends to think of compilers as very-large-scale software
enterprises.  However, the size of the compiler is a function of the language
the compiler is written in, the number of features offered by the language
being compiled, and the "distance" from source to target languages, i.e. the
extent to which the target language is suited to executing programs intended
for the source machine (the abstract LISP processor in our case).  When these
three aspects are in favorable conjunction, as is the case when the compilation
is from LISP to PDP10 machine language and the compiler is itself written in
LISP (an ideal application of symbol and list processing techniques), one can
write a quite modest compiler that still permits realization of the major gains
of compilation over interpretation, leaving room for less than a factor of two
further improvement in object-code speed, and that only at considerably more
expense in compiler size and speed.  In "Recursive Programming in LISP,"
McCarthy gives a self-contained LISP compiler written in 75 lines of LISP
(expressed as M-expressions).

⊗⊗Interfacing compiled and interpreted code⊗.
Earlier we drew a gloomy picture of
the ease of debugging compiled code, making a case for debugging only
interpreted code.  This can be inconvenient in those cases where long runs are
needed during development.  For this reason compilation is not performed as an
all or nothing task but rather is applied independently to each user-defined
function.  Even when running compiled code, the interpreter remains as part of
the system, both to accept the user's commands (which are LISP programs that are
usually not worth compiling and so need to be interpreted) and to interpret any
functions left uncompiled.  

	To accomplish this, function calls that would normally be translated as
subroutine jumps are instead translated as calls to the system in which the name
of the desired function is passed as an additional parameter.  When such a call
is executed the system checks the state of the requested function.  If the
function is uncompiled, the interpreter is called on to deal with it.  If it is
compiled, the just-executed system call in the compiled code is changed by the
system into a subroutine jump directly to the requested function.  Hence the
second time this function is requested by that part of the program, system
intervention is not required.  This technique supplies a flexible solution to
the "linking loader" problem, with linking being done at run time instead of
load time and then only when the link is actually needed.
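The patching scheme can be sketched in modern notation (Python); a compiled
call site is modeled as a one-element slot initially holding a trap to the
"system," and ≡≡DEFINITIONS≡ and the other names are illustrative, not from
any actual LISP system.

```python
# A sketch of run-time linking: the first call through a site resolves
# the function name and patches the site so later calls go direct.

DEFINITIONS = {}    # the system's table of function definitions

def make_call_site(name):
    """Model of one compiled call site for the function called name."""
    def trap(*args):
        # first call only: resolve the name, patch the link, proceed
        slot[0] = DEFINITIONS[name]
        return slot[0](*args)
    slot = [trap]
    return lambda *args: slot[0](*args)
```

Once the site has been patched the system is no longer consulted, just as the
patched subroutine jump in compiled code bypasses further system intervention.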

				 APPLICATIONS

	When regularly structured data is to be manipulated, which usually
means numeric vectors and matrices, array processing machines and languages are
best suited to the task, being best able to benefit from the parallelism
possible in many such problems.  When main memory is scarce, languages with a
wide variety of data types for coping with different sizes and structures of data
may be appropriate.  Under almost any other conditions LISP is an excellent
choice of language.  Once one has accepted the "admission fee," so to speak, of
an overhead of two pointers per list element, and the lack of random access
into lists, the advantages of list processing as done in LISP begin to be
appreciated.

	One of the most significant benefits in the long run is the ease with
which the programmer can deal with complex data structures when these can be
viewed as abstract independent objects rather than as pieces of a network (see
the section on Mathematics behind LISP).  Both LISP and APL outshine the other
widely used languages in this respect, and as machine costs decrease and
programming costs increase these benefits will be felt progressively more. 
When the objects are irregularly structured, APL is not as appropriate as LISP.

	The most obvious application of LISP is language processing in its
broadest sense.  This includes natural languages, programming languages,
logical languages, and algebraic languages.  Common to all of these areas is
the need to manipulate symbolic expressions, which arise as the parse trees or
structure of the linear text that humans find most convenient to manipulate. 
Text alone is not appropriate within the computer when the computer needs to
work with the meaning or denotation of the text, because the structure of the
text is essential to its meaning.  Parsing, cheap for a human, is comparatively
expensive for a computer, and should be confined to input and output.

⊗⊗Natural language⊗.  LISP is the one programming language used regularly by those
working in the natural language processing field, though it is by no means used
exclusively by them.  The symbol-handling machinery, particularly property
lists, supplies everything required for a dictionary except morphological
processing, which must be done by the user.  The list processing capability
facilitates construction of parse trees for natural language input, and
simplifies tree processing at the semantic level during or following parsing.

	Some natural language systems implemented in LISP are Bobrow's STUDENT
program, Raphael's SIR program, the surviving version of Weizenbaum's ELIZA
program (the original was written in SLIP, a FORTRAN-based list processing
system developed by Weizenbaum), Woods' Augmented-Transition-Network (ATN)
parser and his LUNAR question-answering system for moon rocks, and Winograd's
SHRDLU, to mention only the most prominent.  

⊗⊗Programming languages⊗.  As with natural language, both symbols and lists are
ideal data types for interpreters, compilers and optimizers.  LISP's symbol
capabilities supply all that is needed for a symbol table.  The lack of
constraints on how many properties an atom can have or what they may be means
that the symbol table can be as elaborate or as simple as the programmer wants.
Lists supply an excellent representation for parse trees or for the
intermediate language used as a halfway station between the source and
object code.

⊗⊗Logical languages⊗.  Automatic theorem provers work with logical languages.  This
was one of the original applications seen for LISP, and the LISP 1.5 manual,
though short (105 pages) as programming language manuals go, nevertheless
managed to include as an example LISP program a complete implementation of
H. Wang's decision procedure for theorems of propositional calculus.  Since
then many automatic theorem provers have been implemented in LISP.

⊗⊗Algebraic languages⊗.  The differentiation program given earlier gives a good
indication of the suitability of LISP for algebraic manipulation programs.  The
most notable such systems implemented in LISP are Moses' MACSYMA system at MIT,
Griesmer and Jenks' Scratchpad system at IBM, and Hearn's REDUCE system at the
University of Utah.

⊗⊗Other areas⊗.  As a general purpose programming language, LISP can be quite
useful when there exists a good optimizing compiler.  The incentives to
develop such a compiler have not however been strong until recently, when the
algebraic symbol processing community began to feel the need for numerical
processing at a level of efficiency offered by good optimizing FORTRAN
compilers.  (Good FORTRAN optimizers exist not so much because FORTRAN
particularly lends itself to optimization as because that is the language where
the most research effort on optimizing compilers, notably by IBM, has been
placed in the past.)  As a result of such stimulus, the LISP compiler presently
in use at MIT is now quite satisfactory in this respect (though still not as
good as the best FORTRAN compilers) and in view of LISP's considerable
superiority over FORTRAN in non-numeric areas offers even numerically-oriented
programmers an excellent language.  It is for example the language used for the
plasma physics research conducted at MIT.  Though the computational
requirements are heavily numerical, LISP appears to be more suited to the
non-numerical work involved and quite adequate for the numerical.    

.ONCE CENTER
BIBLIOGRAPHY

Allen, J.  ⊗⊗Anatomy of LISP⊗.  To be published by McGraw-Hill.

Berkeley, E.C. and D.G. Bobrow.  ⊗⊗The Programming Language LISP: Its Operation and
Applications⊗. Information International, Inc., Mass., 1964.

Greenberg, B.  Notes on the Programming Language LISP.  M.I.T. Student
Information Processing Board, 1976.

McCarthy, J.M. et al.  ⊗⊗LISP 1 Programmer's Manual⊗.  M.I.T. Computation
Center and Research Lab. of Electronics, Cambridge, Mass., March 1960.

McCarthy, J.M. et al.  ⊗⊗LISP 1.5 Programmer's Manual⊗.  M.I.T. Computation
Center and Research Lab. of Electronics, Cambridge, Mass., August 1962.

McCarthy, J.M.  Recursive Programming in LISP.  Notes, Stanford University,
1976.

Moon, D.A.  ⊗⊗MACLISP Reference Manual⊗, Revision 0. Project MAC, MIT, April 1974.

Sandewall, E.  Programming in an Interactive Environment.  The LISP Experience.
LiTH-MAT-R-76-9, Linkoeping University, Linkoeping, Sweden.

Sammet, J.E.  ⊗⊗Programming Languages: History and Fundamentals⊗.
Prentice-Hall, Englewood Cliffs, N.J., 1969.

Teitelman, W.  ⊗⊗INTERLISP Reference Manual.⊗  Xerox Palo Alto Research Center,
Palo Alto, Calif., 1974.

Weissman, C.  ⊗⊗LISP 1.5 Primer⊗.  Dickenson Publishing Co., Belmont, Calif., 1967.